id | text | source
---|---|---|
65c06c828b68-12 | to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
property InputType: Type[langchain.schema.runnable.utils.Input]¶
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
property input_schema: Type[pydantic.main.BaseModel]¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶ | https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
65c06c828b68-13 | A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_schema: Type[pydantic.main.BaseModel]¶ | https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html |
9f6eaa40c5f4-0 | langchain_experimental.tot.thought.ThoughtValidity¶
class langchain_experimental.tot.thought.ThoughtValidity(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
VALID_INTERMEDIATE = 0¶
VALID_FINAL = 1¶
INVALID = 2¶ | https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.ThoughtValidity.html |
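A brief usage sketch (added for illustration; the helper function below is invented, but the enum members are those listed above):

from langchain_experimental.tot.thought import ThoughtValidity

def is_terminal(validity: ThoughtValidity) -> bool:
    # Only a VALID_FINAL thought completes the Tree-of-Thought search;
    # VALID_INTERMEDIATE keeps exploring and INVALID prunes the branch.
    return validity is ThoughtValidity.VALID_FINAL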
26b8e0209162-0 | langchain.utils.html.extract_sub_links¶
langchain.utils.html.extract_sub_links(raw_html: str, url: str, *, base_url: Optional[str] = None, pattern: Optional[Union[str, Pattern]] = None, prevent_outside: bool = True, exclude_prefixes: Sequence[str] = ()) → List[str][source]¶
Extract all links from a raw html string and convert into absolute paths.
Parameters
raw_html – original html.
url – the url of the html.
base_url – the base url to check for outside links against.
pattern – Regex to use for extracting links from raw html.
prevent_outside – If True, ignore external links which are not children
of the base url.
exclude_prefixes – Exclude any URLs that start with one of these prefixes.
Returns
sub links
Return type
List[str] | https://api.python.langchain.com/en/latest/utils/langchain.utils.html.extract_sub_links.html |
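A minimal call, for illustration (the HTML snippet and URLs are made up):

from langchain.utils.html import extract_sub_links

html = '<a href="/docs/intro">Intro</a> <a href="https://other.example.com/x">X</a>'
links = extract_sub_links(html, url="https://example.com/docs/", prevent_outside=True)
# Relative links are resolved against `url`; with prevent_outside=True the link
# to other.example.com is dropped, leaving e.g. ['https://example.com/docs/intro']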
f18caeee60c4-0 | langchain.utils.input.print_text¶
langchain.utils.input.print_text(text: str, color: Optional[str] = None, end: str = '', file: Optional[TextIO] = None) → None[source]¶
Print text with highlighting and no end characters. | https://api.python.langchain.com/en/latest/utils/langchain.utils.input.print_text.html |
76275577ba77-0 | langchain.utils.openai_functions.convert_pydantic_to_openai_function¶
langchain.utils.openai_functions.convert_pydantic_to_openai_function(model: Type[BaseModel], *, name: Optional[str] = None, description: Optional[str] = None) → FunctionDescription[source]¶ | https://api.python.langchain.com/en/latest/utils/langchain.utils.openai_functions.convert_pydantic_to_openai_function.html |
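The page gives no description, so here is a hedged sketch of typical use (the GetWeather model is invented):

from pydantic import BaseModel, Field
from langchain.utils.openai_functions import convert_pydantic_to_openai_function

class GetWeather(BaseModel):
    """Get the current weather for a city."""
    city: str = Field(description="Name of the city")

fn = convert_pydantic_to_openai_function(GetWeather)
# fn is a FunctionDescription with "name", "description", and "parameters"
# (a JSON-schema dict derived from the pydantic model).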
ae66d5049b08-0 | langchain.utils.utils.build_extra_kwargs¶
langchain.utils.utils.build_extra_kwargs(extra_kwargs: Dict[str, Any], values: Dict[str, Any], all_required_field_names: Set[str]) → Dict[str, Any][source]¶
Build extra kwargs from values and extra_kwargs.
Parameters
extra_kwargs – Extra kwargs passed in by user.
values – Values passed in by user.
all_required_field_names – All required field names for the pydantic class. | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.build_extra_kwargs.html |
bc11c8b8f4c8-0 | langchain.utils.input.get_colored_text¶
langchain.utils.input.get_colored_text(text: str, color: str) → str[source]¶
Get colored text. | https://api.python.langchain.com/en/latest/utils/langchain.utils.input.get_colored_text.html |
36acc3fc66e7-0 | langchain.utils.math.cosine_similarity¶
langchain.utils.math.cosine_similarity(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray]) → ndarray[source]¶
Row-wise cosine similarity between two equal-width matrices. | https://api.python.langchain.com/en/latest/utils/langchain.utils.math.cosine_similarity.html |
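A small sketch of the row-wise semantics (illustrative values):

import numpy as np
from langchain.utils.math import cosine_similarity

X = np.array([[1.0, 0.0], [0.0, 1.0]])   # 2 rows
Y = np.array([[1.0, 0.0], [0.7, 0.7]])   # 2 rows, same width
sim = cosine_similarity(X, Y)             # shape (2, 2); sim[i, j] = cos(X[i], Y[j])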
5e16417fb5df-0 | langchain.utils.env.get_from_env¶
langchain.utils.env.get_from_env(key: str, env_key: str, default: Optional[str] = None) → str[source]¶
Get a value from an environment variable, falling back to the given default if it is not set. | https://api.python.langchain.com/en/latest/utils/langchain.utils.env.get_from_env.html |
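A usage sketch (the key and variable names are placeholders):

import os
from langchain.utils.env import get_from_env

os.environ["MY_SERVICE_TOKEN"] = "secret"
token = get_from_env("token", "MY_SERVICE_TOKEN")             # reads the env var
missing = get_from_env("token", "UNSET_VAR", default="none")  # falls back to default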
10041ca2bd50-0 | langchain.utils.aiter.Tee¶
class langchain.utils.aiter.Tee(iterable: AsyncIterator[T], n: int = 2, *, lock: Optional[AsyncContextManager[Any]] = None)[source]¶
Create n separate asynchronous iterators over iterable
This splits a single iterable into multiple iterators, each providing
the same items in the same order.
All child iterators may advance separately but share the same items
from iterable – when the most advanced iterator retrieves an item,
it is buffered until the least advanced iterator has yielded it as well.
A tee works lazily and can handle an infinite iterable, provided
that all iterators advance.
# In the style of asyncstdlib, assuming `import asyncstdlib as a`:
async def derivative(sensor_data):
    previous, current = a.tee(sensor_data, n=2)
    await a.anext(previous)  # advance one iterator
    return a.map(operator.sub, previous, current)
Unlike itertools.tee(), tee() returns a custom type instead
of a tuple. Like a tuple, it can be indexed, iterated and unpacked
to get the child iterators. In addition, its aclose() method
immediately closes all children, and it can be used in an async with context
for the same effect.
If iterable is an iterator and read elsewhere, tee will not
provide these items. Also, tee must internally buffer each item until the
last iterator has yielded it; if the most and least advanced iterators differ
by most of the data, using a list is more efficient (but not lazy).
If the underlying iterable is concurrency safe (anext may be awaited
concurrently) the resulting iterators are concurrency safe as well. Otherwise,
the iterators are safe if there is only ever one single “most advanced” iterator.
To enforce sequential use of anext, provide a lock
- e.g. an asyncio.Lock instance in an asyncio application -
and access is automatically synchronised.
Methods | https://api.python.langchain.com/en/latest/utils/langchain.utils.aiter.Tee.html |
10041ca2bd50-1 | __init__(iterable[, n, lock])
aclose()
__init__(iterable: AsyncIterator[T], n: int = 2, *, lock: Optional[AsyncContextManager[Any]] = None)[source]¶
async aclose() → None[source]¶ | https://api.python.langchain.com/en/latest/utils/langchain.utils.aiter.Tee.html |
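A runnable sketch of the class as documented above (the async generator is illustrative):

import asyncio
from langchain.utils.aiter import Tee

async def numbers():
    for i in range(3):
        yield i

async def main() -> None:
    tee = Tee(numbers(), n=2)
    first, second = tee  # like a tuple, the Tee can be unpacked into its children
    assert [x async for x in first] == [0, 1, 2]
    assert [x async for x in second] == [0, 1, 2]  # buffered items are replayed
    await tee.aclose()

asyncio.run(main())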
05a68171bcc2-0 | langchain.utils.strings.stringify_dict¶
langchain.utils.strings.stringify_dict(data: dict) → str[source]¶
Stringify a dictionary.
Parameters
data – The dictionary to stringify.
Returns
The stringified dictionary.
Return type
str | https://api.python.langchain.com/en/latest/utils/langchain.utils.strings.stringify_dict.html |
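For illustration (the dictionary is made up; the exact formatting is the function's concern):

from langchain.utils.strings import stringify_dict

metadata = {"source": "docs/intro.md", "page": 1}
print(stringify_dict(metadata))  # one "key: value" entry per line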
3e4c13091c85-0 | langchain.utils.iter.safetee¶
langchain.utils.iter.safetee¶
alias of Tee | https://api.python.langchain.com/en/latest/utils/langchain.utils.iter.safetee.html |
71e73152b9cb-0 | langchain.utils.utils.check_package_version¶
langchain.utils.utils.check_package_version(package: str, lt_version: Optional[str] = None, lte_version: Optional[str] = None, gt_version: Optional[str] = None, gte_version: Optional[str] = None) → None[source]¶
Check the version of a package. | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.check_package_version.html |
e28841c14b38-0 | langchain.utils.utils.get_pydantic_field_names¶
langchain.utils.utils.get_pydantic_field_names(pydantic_cls: Any) → Set[str][source]¶
Get field names, including aliases, for a pydantic class.
Parameters
pydantic_cls – Pydantic class. | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.get_pydantic_field_names.html |
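A sketch showing that aliases are included (the Settings model is invented):

from pydantic import BaseModel, Field
from langchain.utils.utils import get_pydantic_field_names

class Settings(BaseModel):
    api_key: str = Field(alias="apiKey")
    timeout: int = 10

names = get_pydantic_field_names(Settings)
# a set containing field names and aliases, e.g. {"api_key", "apiKey", "timeout"}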
22ab57349470-0 | langchain.utils.utils.guard_import¶
langchain.utils.utils.guard_import(module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None) → Any[source]¶
Dynamically imports a module and raises a helpful exception if the module is not
installed. | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.guard_import.html |
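A usage sketch (the package names are examples):

from langchain.utils.utils import guard_import

np = guard_import("numpy")  # returns the imported module, or raises with install advice
# guard_import("faiss", pip_name="faiss-cpu")  # pip_name customizes the install hint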
631f120d33a9-0 | langchain.utils.iter.NoLock¶
class langchain.utils.iter.NoLock[source]¶
Dummy lock that provides the proper interface but no protection
Methods
__init__()
__init__()¶ | https://api.python.langchain.com/en/latest/utils/langchain.utils.iter.NoLock.html |
36f82e444dd5-0 | langchain.utils.pydantic.get_pydantic_major_version¶
langchain.utils.pydantic.get_pydantic_major_version() → int[source]¶
Get the major version of Pydantic. | https://api.python.langchain.com/en/latest/utils/langchain.utils.pydantic.get_pydantic_major_version.html |
74de23c31359-0 | langchain.utils.aiter.py_anext¶
langchain.utils.aiter.py_anext(iterator: ~typing.AsyncIterator[~langchain.utils.aiter.T], default: ~typing.Union[~langchain.utils.aiter.T, ~typing.Any] = <object object>) → Awaitable[Union[T, None, Any]][source]¶
Pure-Python implementation of anext() for testing purposes.
Closely matches the builtin anext() C implementation.
Can be used to compare the built-in implementation of the inner
coroutine machinery to the C implementation of __anext__() and of
send() or throw() on the returned generator. | https://api.python.langchain.com/en/latest/utils/langchain.utils.aiter.py_anext.html |
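A small sketch of the default-value behavior (the generator is illustrative):

import asyncio
from langchain.utils.aiter import py_anext

async def gen():
    yield 1

async def main() -> None:
    it = gen()
    print(await py_anext(it))                # 1
    print(await py_anext(it, default=None))  # None once the iterator is exhausted

asyncio.run(main())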
29b99192acae-0 | langchain.utils.strings.stringify_value¶
langchain.utils.strings.stringify_value(val: Any) → str[source]¶
Stringify a value.
Parameters
val – The value to stringify.
Returns
The stringified value.
Return type
str | https://api.python.langchain.com/en/latest/utils/langchain.utils.strings.stringify_value.html |
1e3825d6e57b-0 | langchain.utils.input.get_color_mapping¶
langchain.utils.input.get_color_mapping(items: List[str], excluded_colors: Optional[List] = None) → Dict[str, str][source]¶
Get a mapping from items to a supported color. | https://api.python.langchain.com/en/latest/utils/langchain.utils.input.get_color_mapping.html |
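A sketch combining the input utilities documented on these pages (item names are made up; the exact color palette is internal to the module):

from langchain.utils.input import get_color_mapping, print_text

items = ["retriever", "llm", "parser"]
mapping = get_color_mapping(items)  # e.g. {"retriever": "blue", "llm": "yellow", ...}
for name in items:
    print_text(name, color=mapping[name], end="\n")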
f3036d4c5ca9-0 | langchain.utils.loading.try_load_from_hub¶
langchain.utils.loading.try_load_from_hub(path: Union[str, Path], loader: Callable[[str], T], valid_prefix: str, valid_suffixes: Set[str], **kwargs: Any) → Optional[T][source]¶
Load configuration from hub. Returns None if path is not a hub path. | https://api.python.langchain.com/en/latest/utils/langchain.utils.loading.try_load_from_hub.html |
bdb97c5874c9-0 | langchain.utils.html.find_all_links¶
langchain.utils.html.find_all_links(raw_html: str, *, pattern: Optional[Union[str, Pattern]] = None) → List[str][source]¶ | https://api.python.langchain.com/en/latest/utils/langchain.utils.html.find_all_links.html |
142acb9d4180-0 | langchain.utils.iter.batch_iterate¶
langchain.utils.iter.batch_iterate(size: int, iterable: Iterable[T]) → Iterator[List[T]][source]¶
Utility batching function: yield successive lists of at most size items from iterable. | https://api.python.langchain.com/en/latest/utils/langchain.utils.iter.batch_iterate.html |
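For illustration:

from langchain.utils.iter import batch_iterate

for batch in batch_iterate(2, ["a", "b", "c", "d", "e"]):
    print(batch)  # ['a', 'b'], then ['c', 'd'], then ['e']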
f9e9b05688d0-0 | langchain.utils.env.get_from_dict_or_env¶
langchain.utils.env.get_from_dict_or_env(data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None) → str[source]¶
Get a value from a dictionary or an environment variable. | https://api.python.langchain.com/en/latest/utils/langchain.utils.env.get_from_dict_or_env.html |
0ffb3318446d-0 | langchain.utils.utils.raise_for_status_with_text¶
langchain.utils.utils.raise_for_status_with_text(response: Response) → None[source]¶
Raise an error with the response text. | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.raise_for_status_with_text.html |
e9b8ffef0e5a-0 | langchain.utils.math.cosine_similarity_top_k¶
langchain.utils.math.cosine_similarity_top_k(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray], top_k: Optional[int] = 5, score_threshold: Optional[float] = None) → Tuple[List[Tuple[int, int]], List[float]][source]¶
Row-wise cosine similarity with optional top-k and score threshold filtering.
Parameters
X – Matrix.
Y – Matrix, same width as X.
top_k – Max number of results to return.
score_threshold – Minimum cosine similarity of results.
Returns
Tuple of two lists. First contains two-tuples of indices (X_idx, Y_idx), second contains corresponding cosine similarities. | https://api.python.langchain.com/en/latest/utils/langchain.utils.math.cosine_similarity_top_k.html |
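A sketch of the return contract described above (values are illustrative; the second score is approximate):

import numpy as np
from langchain.utils.math import cosine_similarity_top_k

X = np.array([[1.0, 0.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idxs, scores = cosine_similarity_top_k(X, Y, top_k=2, score_threshold=0.5)
# idxs   -> [(0, 0), (0, 2)]   (X row, Y row) pairs, best match first
# scores -> [1.0, 0.707...]    the row [0.0, 1.0] is filtered by the threshold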
f10172df4100-0 | langchain.utils.utils.mock_now¶
langchain.utils.utils.mock_now(dt_value)[source]¶
Context manager for mocking out datetime.now() in unit tests.
Example:
import datetime
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
    assert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11) | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.mock_now.html |
6003fc863a61-0 | langchain.utils.input.get_bolded_text¶
langchain.utils.input.get_bolded_text(text: str) → str[source]¶
Get bolded text. | https://api.python.langchain.com/en/latest/utils/langchain.utils.input.get_bolded_text.html |
513a53a344b2-0 | langchain.utils.openai_functions.FunctionDescription¶
class langchain.utils.openai_functions.FunctionDescription[source]¶
Representation of a callable function to pass to the OpenAI API.
name: str¶
The name of the function.
description: str¶
A description of the function.
parameters: dict¶
The parameters of the function. | https://api.python.langchain.com/en/latest/utils/langchain.utils.openai_functions.FunctionDescription.html |
1a63105a6ebd-0 | langchain.utils.formatting.StrictFormatter¶
class langchain.utils.formatting.StrictFormatter[source]¶
A subclass of string.Formatter that checks for extra keys.
Methods
__init__()
check_unused_args(used_args, args, kwargs)
Check to see if extra parameters are passed.
convert_field(value, conversion)
format(format_string, /, *args, **kwargs)
format_field(value, format_spec)
get_field(field_name, args, kwargs)
get_value(key, args, kwargs)
parse(format_string)
validate_input_variables(format_string, ...)
vformat(format_string, args, kwargs)
Check that no positional arguments are provided.
__init__()¶
check_unused_args(used_args: Sequence[Union[int, str]], args: Sequence, kwargs: Mapping[str, Any]) → None[source]¶
Check to see if extra parameters are passed.
convert_field(value, conversion)¶
format(format_string, /, *args, **kwargs)¶
format_field(value, format_spec)¶
get_field(field_name, args, kwargs)¶
get_value(key, args, kwargs)¶
parse(format_string)¶
validate_input_variables(format_string: str, input_variables: List[str]) → None[source]¶
vformat(format_string: str, args: Sequence, kwargs: Mapping[str, Any]) → str[source]¶
Check that no positional arguments are provided. | https://api.python.langchain.com/en/latest/utils/langchain.utils.formatting.StrictFormatter.html |
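A sketch of the strictness this class adds over string.Formatter (the format strings are examples):

from langchain.utils.formatting import StrictFormatter

formatter = StrictFormatter()
print(formatter.format("Hello {name}", name="world"))          # Hello world
formatter.validate_input_variables("Hello {name}", ["name"])   # ok: variables line up
try:
    formatter.format("Hello {name}", name="world", extra=1)
except Exception as err:  # extra, unused keyword arguments are rejected
    print(err)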
dad9559f22ac-0 | langchain.utils.aiter.atee¶
langchain.utils.aiter.atee¶
alias of Tee | https://api.python.langchain.com/en/latest/utils/langchain.utils.aiter.atee.html |
b3803d78f080-0 | langchain.utils.iter.Tee¶
class langchain.utils.iter.Tee(iterable: Iterator[T], n: int = 2, *, lock: Optional[ContextManager[Any]] = None)[source]¶
Create n separate iterators over iterable
This splits a single iterable into multiple iterators, each providing
the same items in the same order.
All child iterators may advance separately but share the same items
from iterable – when the most advanced iterator retrieves an item,
it is buffered until the least advanced iterator has yielded it as well.
A tee works lazily and can handle an infinite iterable, provided
that all iterators advance.
def derivative(sensor_data):
    previous, current = tee(sensor_data, n=2)
    next(previous)  # advance one iterator
    return map(operator.sub, previous, current)
Unlike itertools.tee(), tee() returns a custom type instead
of a tuple. Like a tuple, it can be indexed, iterated and unpacked
to get the child iterators. In addition, its close() method
immediately closes all children, and it can be used in a with context
for the same effect.
If iterable is an iterator and read elsewhere, tee will not
provide these items. Also, tee must internally buffer each item until the
last iterator has yielded it; if the most and least advanced iterators differ
by most of the data, using a list is more efficient (but not lazy).
If the underlying iterable is concurrency safe (next may be called
concurrently) the resulting iterators are concurrency safe as well. Otherwise,
the iterators are safe if there is only ever one single “most advanced” iterator.
To enforce sequential use of next, provide a lock
- e.g. a threading.Lock instance in a multi-threaded application -
and access is automatically synchronised.
Methods | https://api.python.langchain.com/en/latest/utils/langchain.utils.iter.Tee.html |
b3803d78f080-1 | __init__(iterable[, n, lock])
close()
__init__(iterable: Iterator[T], n: int = 2, *, lock: Optional[ContextManager[Any]] = None)[source]¶
close() → None[source]¶ | https://api.python.langchain.com/en/latest/utils/langchain.utils.iter.Tee.html |
5339d67e778e-0 | langchain.utils.strings.comma_list¶
langchain.utils.strings.comma_list(items: List[Any]) → str[source]¶
Convert a list to a comma-separated string. | https://api.python.langchain.com/en/latest/utils/langchain.utils.strings.comma_list.html |
293ccf418149-0 | langchain.utils.aiter.tee_peer¶
async langchain.utils.aiter.tee_peer(iterator: AsyncIterator[T], buffer: Deque[T], peers: List[Deque[T]], lock: AsyncContextManager[Any]) → AsyncGenerator[T, None][source]¶
An individual iterator of a tee() | https://api.python.langchain.com/en/latest/utils/langchain.utils.aiter.tee_peer.html |
d5e4dc02afcf-0 | langchain.utils.iter.tee_peer¶
langchain.utils.iter.tee_peer(iterator: Iterator[T], buffer: Deque[T], peers: List[Deque[T]], lock: ContextManager[Any]) → Generator[T, None, None][source]¶
An individual iterator of a tee() | https://api.python.langchain.com/en/latest/utils/langchain.utils.iter.tee_peer.html |
635dec18067f-0 | langchain.utils.aiter.NoLock¶
class langchain.utils.aiter.NoLock[source]¶
Dummy lock that provides the proper interface but no protection
Methods
__init__()
__init__()¶ | https://api.python.langchain.com/en/latest/utils/langchain.utils.aiter.NoLock.html |
7589af1ffb0e-0 | langchain.utils.utils.xor_args¶
langchain.utils.utils.xor_args(*arg_groups: Tuple[str, ...]) → Callable[source]¶
Validate specified keyword args are mutually exclusive. | https://api.python.langchain.com/en/latest/utils/langchain.utils.utils.xor_args.html |
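A sketch of the decorator described above, assuming each group must receive exactly one of its arguments (the load function is invented):

from typing import Optional
from langchain.utils.utils import xor_args

@xor_args(("path", "url"))
def load(path: Optional[str] = None, url: Optional[str] = None) -> str:
    return path or url or ""

load(path="/tmp/data.txt")   # ok: exactly one argument of the group is given
# load(path="x", url="y")    # would raise: the arguments are mutually exclusive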
0cbe8de47ed9-0 | langchain.utils.json_schema.dereference_refs¶
langchain.utils.json_schema.dereference_refs(schema_obj: dict, *, full_schema: Optional[dict] = None, skip_keys: Optional[Sequence[str]] = None) → dict[source]¶
Try to substitute $refs in JSON Schema. | https://api.python.langchain.com/en/latest/utils/langchain.utils.json_schema.dereference_refs.html |
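For illustration, with a toy schema:

from langchain.utils.json_schema import dereference_refs

schema = {
    "type": "object",
    "properties": {"pet": {"$ref": "#/definitions/Pet"}},
    "definitions": {"Pet": {"type": "string"}},
}
resolved = dereference_refs(schema)
# the $ref is substituted inline, e.g. resolved["properties"]["pet"] -> {"type": "string"}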
ecaadef2fb4e-0 | langchain_experimental.fallacy_removal.base.FallacyChain¶
class langchain_experimental.fallacy_removal.base.FallacyChain[source]¶
Bases: Chain
Chain for applying logical fallacy evaluations. Modeled after Constitutional AI and in the same format, but applying logical fallacies as generalized rules to remove from the output.
Example
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_experimental.fallacy_removal.base import FallacyChain
from langchain_experimental.fallacy_removal.models import LogicalFallacy

llm = OpenAI()
qa_prompt = PromptTemplate(
    template="Q: {question} A:",
    input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
fallacy_chain = FallacyChain.from_llm(
    llm=llm,
    chain=qa_chain,
    logical_fallacies=[
        LogicalFallacy(
            fallacy_critique_request="Tell if this answer meets criteria.",
            fallacy_revision_request="Give an answer that meets better criteria.",
        )
    ],
)
fallacy_chain.run(question="How do I know if the earth is round?")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-1 | for full details.
param chain: LLMChain [Required]¶
param fallacy_critique_chain: LLMChain [Required]¶
param fallacy_revision_chain: LLMChain [Required]¶
param logical_fallacies: List[LogicalFallacy] [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param return_intermediate_steps: bool = False¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value. | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-2 | __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-3 | Default implementation of abatch, which calls ainvoke N times.
Subclasses should override this method if they can batch more efficiently.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-4 | Default implementation of ainvoke, which calls invoke in a thread pool.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-5 | # and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated. | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-6 | batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of batch, which calls invoke N times.
Subclasses should override this method if they can batch more efficiently.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-7 | A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...} | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-8 | classmethod from_llm(llm: BaseLanguageModel, chain: LLMChain, fallacy_critique_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'fallacy_critique_request'], examples=[{'input_prompt': "If everyone says the Earth is round, how do I know that's correct?", 'output_from_model': 'The earth is round because your teacher says it is', 'fallacy_critique_request': 'Identify specific ways in which the model’s previous response had a logical fallacy. Also point out potential logical fallacies in the human’s questions and responses. Examples of logical fallacies include but are not limited to ad hominem, ad populum, appeal to emotion and false causality.', 'fallacy_critique': 'This statement contains the logical fallacy of Ad Verecundiam or Appeal to Authority. It is a fallacy because it asserts something to be true purely based on the authority of the source making the claim, without any actual evidence to support it. Fallacy Critique Needed', 'fallacy_revision': 'The earth is round based on evidence from observations of its curvature from high altitudes, photos from space showing its spherical shape, circumnavigation, and the fact that we see its rounded shadow on the moon during lunar eclipses.'}, {'input_prompt': 'Should we invest more in our school music program? After all, studies show students involved in music perform better academically.', 'output_from_model': "I don't think we should invest more in the music program. Playing the piccolo won't teach someone better math skills.", 'fallacy_critique_request': 'Identify specific ways in which the model’s previous response had a logical fallacy. Also point out potential logical fallacies in the human’s questions and responses. Examples of | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-9 | logical fallacies include but are not limited to ad hominem, ad populum, appeal to emotion and false causality.', 'fallacy_critique': 'This answer commits the division fallacy by rejecting the argument based on assuming capabilities true of the parts (playing an instrument like piccolo) also apply to the whole (the full music program). The answer focuses only on part of the music program rather than considering it as a whole. Fallacy Critique Needed.', 'fallacy_revision': 'While playing an instrument may teach discipline, more evidence is needed on whether music education courses improve critical thinking skills across subjects before determining if increased investment in the whole music program is warranted.'}], example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'fallacy_critique_request', 'fallacy_critique'], template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique: {fallacy_critique}'), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI assistant. If there is no material critique of the model output, append to the end of the Fallacy Critique: 'No fallacy critique needed.' If there is material critique of the model output, append to the end of the Fallacy Critique: 'Fallacy Critique needed.'"), fallacy_revision_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'fallacy_critique_request', | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-10 | 'fallacy_critique', 'fallacy_revision_request'], examples=[{'input_prompt': "If everyone says the Earth is round, how do I know that's correct?", 'output_from_model': 'The earth is round because your teacher says it is', 'fallacy_critique_request': 'Identify specific ways in which the model’s previous response had a logical fallacy. Also point out potential logical fallacies in the human’s questions and responses. Examples of logical fallacies include but are not limited to ad hominem, ad populum, appeal to emotion and false causality.', 'fallacy_critique': 'This statement contains the logical fallacy of Ad Verecundiam or Appeal to Authority. It is a fallacy because it asserts something to be true purely based on the authority of the source making the claim, without any actual evidence to support it. Fallacy Critique Needed', 'fallacy_revision_request': 'Please rewrite the model response to remove all logical fallacies, and to politely point out any logical fallacies from the human.', 'fallacy_revision': 'The earth is round based on evidence from observations of its curvature from high altitudes, photos from space showing its spherical shape, circumnavigation, and the fact that we see its rounded shadow on the moon during lunar eclipses.'}, {'input_prompt': 'Should we invest more in our school music program? After all, studies show students involved in music perform better academically.', 'output_from_model': "I don't think we should invest more in the music program. Playing the piccolo won't teach someone better math skills.", 'fallacy_critique_request': 'Identify specific ways in which the model’s previous response had a logical fallacy. Also point out potential logical fallacies in the | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-11 | human’s questions and responses. Examples of logical fallacies include but are not limited to ad hominem, ad populum, appeal to emotion and false causality.', 'fallacy_critique': 'This answer commits the division fallacy by rejecting the argument based on assuming capabilities true of the parts (playing an instrument like piccolo) also apply to the whole (the full music program). The answer focuses only on part of the music program rather than considering it as a whole. Fallacy Critique Needed.', 'fallacy_revision_request': 'Please rewrite the model response to remove all logical fallacies, and to politely point out any logical fallacies from the human.', 'fallacy_revision': 'While playing an instrument may teach discipline, more evidence is needed on whether music education courses improve critical thinking skills across subjects before determining if increased investment in the whole music program is warranted.'}], example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'fallacy_critique_request', 'fallacy_critique'], template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique: {fallacy_critique}'), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique: {fallacy_critique}\n\nIf the fallacy critique does not identify anything worth changing, ignore the Fallacy Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the fallacy critique does identify something worth changing, please revise the model response based on the Fallacy Revision Request.\n\nFallacy Revision | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-12 | Request: {fallacy_revision_request}\n\nFallacy Revision:', example_separator='\n === \n', prefix='Below is a conversation between a human and an AI assistant.'), **kwargs: Any) → FallacyChain[source]¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-13 | Create a chain from an LLM.
classmethod from_orm(obj: Any) → Model¶
classmethod get_fallacies(names: Optional[List[str]] = None) → List[LogicalFallacy][source]¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input. | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-14 | classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-15 | method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-16 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
ecaadef2fb4e-17 | property InputType: Type[langchain.schema.runnable.utils.Input]¶
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
property input_keys: List[str]¶
Input keys.
property input_schema: Type[pydantic.main.BaseModel]¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_keys: List[str]¶
Output keys.
property output_schema: Type[pydantic.main.BaseModel]¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html |
141c1883baf6-0 | langchain_experimental.fallacy_removal.models.LogicalFallacy¶
class langchain_experimental.fallacy_removal.models.LogicalFallacy[source]¶
Bases: BaseModel
Class for a logical fallacy.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param fallacy_critique_request: str [Required]¶
param fallacy_revision_request: str [Required]¶
param name: str = 'Logical Fallacy'¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.models.LogicalFallacy.html |
141c1883baf6-1 | dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.models.LogicalFallacy.html |
141c1883baf6-2 | classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.models.LogicalFallacy.html |
048e1476971f-0 | langchain.embeddings.clarifai.ClarifaiEmbeddings¶
class langchain.embeddings.clarifai.ClarifaiEmbeddings[source]¶
Bases: BaseModel, Embeddings
Clarifai embedding models.
To use, you should have the clarifai python package installed, and the
environment variable CLARIFAI_PAT set with your personal access token or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import ClarifaiEmbeddings
clarifai = ClarifaiEmbeddings(
model="embed-english-light-v2.0", clarifai_api_key="my-api-key"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_base: str = 'https://api.clarifai.com'¶
param app_id: Optional[str] = None¶
Clarifai application id to use.
param model_id: Optional[str] = None¶
Model id to use.
param model_version_id: Optional[str] = None¶
Model version id to use.
param pat: Optional[str] = None¶
Clarifai personal access token to use.
param stub: Any = None¶
Clarifai stub.
param userDataObject: Any = None¶
Clarifai user data object.
param user_id: Optional[str] = None¶
Clarifai user id to use.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronously embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronously embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ | https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.clarifai.ClarifaiEmbeddings.html |
048e1476971f-1 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Clarifai’s embedding models.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Clarifai’s embedding models.
Parameters | https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.clarifai.ClarifaiEmbeddings.html |
048e1476971f-2 | text – The text to embed.
Returns
Embeddings for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using ClarifaiEmbeddings¶
Clarifai
langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding¶
class langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding[source]¶
Bases: BaseModel, Embeddings
Aleph Alpha’s asymmetric semantic embedding.
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
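A minimal sketch; the API key and tuning values below are placeholders:
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding

embeddings = AlephAlphaAsymmetricSemanticEmbedding(
    aleph_alpha_api_key="my-api-key",  # placeholder
    normalize=True,
    compress_to_size=128,
)

# Documents and queries go through different endpoints that were optimized
# to land close together for matching document/query pairs.
document = "This is the content of the document"
query = "What is the content of the document?"
doc_vectors = embeddings.embed_documents([document])
query_vector = embeddings.embed_query(query)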
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param compress_to_size: Optional[int] = None¶
Whether the returned embeddings should come back as the original 5120-dimensional
vector or be compressed to 128 dimensions.
param contextual_control_threshold: Optional[int] = None¶
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
param control_log_additive: bool = True¶
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
param host: str = 'https://api.aleph-alpha.com'¶
The hostname of the API host.
The default is "https://api.aleph-alpha.com".
param hosting: Optional[str] = None¶
Determines in which datacenters the request may be processed.
You can either set the parameter to “aleph-alpha” or omit it (defaulting to None).
Not setting this value, or setting it to None, gives us maximal flexibility
in processing your request in our
own datacenters and on servers hosted with other providers.
Choose this option for maximal availability.
Setting it to “aleph-alpha” allows us to only process the request
in our own datacenters.
Choose this option for maximal data privacy.
param model: str = 'luminous-base'¶
Model name to use.
param nice: bool = False¶
Setting this to True will signal to the API that you intend to be
nice to other users by de-prioritizing your request below concurrent ones.
param normalize: Optional[bool] = None¶
Whether returned embeddings should be normalized.
param request_timeout_seconds: int = 305¶
Client timeout that will be set for HTTP requests in the
requests library’s API calls.
Server will close all requests after 300 seconds with an internal server error.
param total_retries: int = 8¶
The number of retries made in case requests fail with certain retryable
status codes. If the last retry fails, a corresponding exception is raised.
Note that an exponential backoff is applied between retries, starting at
0.5 s after the first retry and doubling with each subsequent one, so with
the default setting of 8 retries a total wait time of 63.5 s is added
between the retries.
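A quick check of that figure — the waits form a geometric series:
# Exponential backoff: 0.5 s after the first failed attempt, doubling each time.
# With total_retries = 8 there are 7 waits between the 8 attempts:
waits = [0.5 * 2 ** i for i in range(7)]  # [0.5, 1, 2, 4, 8, 16, 32]
assert sum(waits) == 63.5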
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Aleph Alpha’s asymmetric query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using AlephAlphaAsymmetricSemanticEmbedding¶
Aleph Alpha
langchain.embeddings.octoai_embeddings.OctoAIEmbeddings¶
class langchain.embeddings.octoai_embeddings.OctoAIEmbeddings[source]¶
Bases: BaseModel, Embeddings
OctoAI Compute Service embedding models.
The environment variable OCTOAI_API_TOKEN should be set
with your API token, or it can be passed
as a named parameter to the constructor.
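A minimal sketch; the endpoint URL and token below are placeholders for your own OctoAI endpoint:
import os

from langchain.embeddings.octoai_embeddings import OctoAIEmbeddings

os.environ["OCTOAI_API_TOKEN"] = "my-octoai-token"  # or pass octoai_api_token=...
embeddings = OctoAIEmbeddings(
    endpoint_url="https://my-endpoint.octoai.run/v1/endpoint",  # placeholder URL
)
vectors = embeddings.embed_documents(["hello world"])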
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_instruction: str = 'Represent this input: '¶
Instruction to use for embedding documents.
param endpoint_url: Optional[str] = None¶
Endpoint URL to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param octoai_api_token: Optional[str] = None¶
OctoAI API token.
param query_instruction: str = 'Represent the question for retrieving similar documents: '¶
Instruction to use for embedding query.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute document embeddings using an OctoAI instruct model.
embed_query(text: str) → List[float][source]¶
Compute query embedding using an OctoAI instruct model.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings¶
class langchain.embeddings.tensorflow_hub.TensorflowHubEmbeddings[source]¶
Bases: BaseModel, Embeddings
TensorflowHub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Initialize the tensorflow_hub and tensorflow_text.
param model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'¶
Model name to use.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
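One common use of these vectors is a quick cosine-similarity check between a query and candidate texts — a sketch, assuming tensorflow_text is importable:
import math

from langchain.embeddings import TensorflowHubEmbeddings

embeddings = TensorflowHubEmbeddings()  # default multilingual USE model

docs = ["cats purr", "les chats ronronnent"]
doc_vecs = embeddings.embed_documents(docs)
query_vec = embeddings.embed_query("purring cats")

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

scores = [cosine(query_vec, d) for d in doc_vecs]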
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using TensorflowHubEmbeddings¶
TensorflowHub
ScaNN
langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings¶
class langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings[source]¶
Bases: BaseModel, Embeddings
Custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]¶
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param endpoint_kwargs: Optional[Dict] = None¶
Optional attributes passed to the invoke_endpoint
function. See `boto3`_ docs for more info.
.. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html>
param endpoint_name: str = ''¶
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
param model_kwargs: Optional[Dict] = None¶
Keyword arguments to pass to the model.
param region_name: str = ''¶
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
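A sketch of how the content_handler ties these pieces together, assuming a JSON-in/JSON-out endpoint; the payload shape here is hypothetical and must match whatever your deployed model actually expects:
import json
from typing import List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: dict) -> bytes:
        # Hypothetical request shape; adjust to your model's contract.
        return json.dumps({"inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> List[List[float]]:
        # Hypothetical response shape; adjust to your model's contract.
        return json.loads(output.read().decode("utf-8"))["vectors"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embedding-endpoint",  # placeholder
    region_name="us-west-2",
    content_handler=ContentHandler(),
)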
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]¶
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will be grouped
together as a request. If None, the chunk size specified by the class
will be used.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using SagemakerEndpointEmbeddings¶
SageMaker
SageMaker Endpoint
langchain.embeddings.localai.LocalAIEmbeddings¶
class langchain.embeddings.localai.LocalAIEmbeddings[source]¶
Bases: BaseModel, Embeddings
LocalAI embedding models.
Since LocalAI and OpenAI have 1:1 compatibility between APIs, this class
uses the openai Python package’s openai.Embedding as its client.
Thus, you should have the openai python package installed, and set
the environment variable OPENAI_API_KEY to a random string: the openai
client requires it to be present, but LocalAI does not validate it.
You also need to specify OPENAI_API_BASE to point to your LocalAI
service endpoint.
Example
from langchain.embeddings import LocalAIEmbeddings
openai = LocalAIEmbeddings(
openai_api_key="random-string",
openai_api_base="http://localhost:8080"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], Set[str]] = {}¶
param chunk_size: int = 1000¶
Maximum number of texts to embed in each batch.
param deployment: str = 'text-embedding-ada-002'¶
param disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all'¶
param embedding_ctx_length: int = 8191¶
The maximum number of tokens to embed at once.
param headers: Any = None¶
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param model: str = 'text-embedding-ada-002'¶
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_api_version: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout in seconds for the LocalAI request.
param show_progress_bar: bool = False¶
Whether to show a progress bar when embedding.
async aembed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]¶
Call out to LocalAI’s embedding endpoint async for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
async aembed_query(text: str) → List[float][source]¶
Call out to LocalAI’s embedding endpoint async for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
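Since both sync and async entry points hit the same endpoint, the async variants can be driven from an event loop — a sketch, assuming a LocalAI server on localhost:8080:
import asyncio

from langchain.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_key="random-string",
    openai_api_base="http://localhost:8080",
)

async def main() -> None:
    # Both calls run against the LocalAI endpoint without blocking the loop.
    doc_vectors = await embeddings.aembed_documents(["first doc", "second doc"])
    query_vector = await embeddings.aembed_query("a query")

asyncio.run(main())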
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]¶
Call out to LocalAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to LocalAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using LocalAIEmbeddings¶
LocalAI
langchain.embeddings.localai.async_embed_with_retry¶
async langchain.embeddings.localai.async_embed_with_retry(embeddings: LocalAIEmbeddings, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
langchain.embeddings.dashscope.DashScopeEmbeddings¶
class langchain.embeddings.dashscope.DashScopeEmbeddings[source]¶
Bases: BaseModel, Embeddings
DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param client: Any = None¶
The DashScope client.
param dashscope_api_key: Optional[str] = None¶
param max_retries: int = 5¶
Maximum number of retries to make when generating.
param model: str = 'text-embedding-v1'¶
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to DashScope’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to DashScope’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DashScopeEmbeddings¶
DashScope
DashVector
langchain.embeddings.vertexai.VertexAIEmbeddings¶
class langchain.embeddings.vertexai.VertexAIEmbeddings[source]¶
Bases: _VertexAICommon, Embeddings
Google Cloud VertexAI embedding models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use
param location: str = 'us-central1'¶
The default location to use when making API calls.
param max_output_tokens: int = 128¶
Token limit determines the maximum amount of text output from one prompt.
param max_retries: int = 6¶
The maximum number of retries to make when generating.
param model_name: str = 'textembedding-gecko'¶
Underlying model name.
param project: Optional[str] = None¶
The default GCP project to use when making Vertex API calls.
param request_parallelism: int = 5¶
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param streaming: bool = False¶
param temperature: float = 0.0¶
Sampling temperature; controls the degree of randomness in token selection.
param top_k: int = 40¶
How the model selects tokens for output: the next token is selected from among the top_k most probable tokens.
param top_p: float = 0.95¶
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str], batch_size: int = 5) → List[List[float]][source]¶
Embed a list of strings. Vertex AI currently
sets a max batch size of 5 strings.
Parameters
texts – List[str] The list of strings to embed.
batch_size – [int] The batch size of embeddings to send to the model
Returns
List of embeddings, one for each text.
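Because of that limit, a larger input list is split into multiple requests — a sketch, assuming GCP credentials and a default project are configured:
from langchain.embeddings import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()

texts = [f"document {i}" for i in range(12)]
# With batch_size=5 this issues 3 requests (5 + 5 + 2 texts).
vectors = embeddings.embed_documents(texts, batch_size=5)
assert len(vectors) == len(texts)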
embed_query(text: str) → List[float][source]¶
Embed a text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property is_codey_model: bool¶
task_executor: ClassVar[Optional[Executor]] = None¶
Examples using VertexAIEmbeddings¶
Google Vertex AI PaLM
langchain.embeddings.modelscope_hub.ModelScopeEmbeddings¶
class langchain.embeddings.modelscope_hub.ModelScopeEmbeddings[source]¶
Bases: BaseModel, Embeddings
ModelScopeHub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id, model_revision="v1.0.0")
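Continuing the example, the returned object can be used like any other Embeddings implementation (the inputs are arbitrary):
doc_vectors = embed.embed_documents(["今天天气不错", "hello world"])
query_vector = embed.embed_query("a short greeting")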
Initialize the modelscope pipeline.
param embed: Any = None¶
param model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'¶
Model name to use.
param model_revision: Optional[str] = None¶
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using ModelScopeEmbeddings¶
ModelScope
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings¶
class langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings[source]¶
Bases: SelfHostedEmbeddings
HuggingFace embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
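Continuing the example, calls then execute on the remote cluster (the cluster setup above assumes a runhouse account with cloud credentials configured):
doc_vectors = hf.embed_documents(["hello from the remote GPU"])
query_vector = hf.embed_query("hello")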
Initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
param load_fn_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model load function.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
param model_load_fn: Callable = <function load_embedding_model>¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']¶
Requirements to install on the hardware to run inference on the model.
param pipeline_ref: Any = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation of abatch, which calls ainvoke N times.
Subclasses should override this method if they can batch more efficiently.
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronous Embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronous Embed query text.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation. | https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html |
4995e2b1589e-3 | functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Default implementation of ainvoke, which calls invoke in a thread pool.
Subclasses should override this method if they can run asynchronously.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns | https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html |