exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
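For example, a minimal sketch (chat is an assumed existing chat model instance):

# update values are not validated, per the note above
deterministic = chat.copy(update={"temperature": 0.0})
clone = chat.copy(deep=True)  # also deep-copies nested objects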
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top-level call.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
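For example, a minimal batched-generation sketch (assumes MiniMax credentials, MINIMAX_API_KEY and MINIMAX_GROUP_ID, are set in the environment; the prompts are illustrative):

from langchain.chat_models import MiniMaxChat
from langchain.schema import HumanMessage

chat = MiniMaxChat()
result = chat.generate(
    [
        [HumanMessage(content="Translate 'hello' into French.")],
        [HumanMessage(content="Translate 'hello' into German.")],
    ]
)
# One list of candidate Generations per input prompt:
for candidates in result.generations:
    print(candidates[0].text)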
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
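For example, a hedged context-window check (chat is an assumed existing chat model; the 4096 limit is illustrative, not read from the model):

from langchain.schema import HumanMessage

MAX_CONTEXT_TOKENS = 4096
messages = [HumanMessage(content="Summarize the plot of Hamlet.")]
if chat.get_num_tokens_from_messages(messages) > MAX_CONTEXT_TOKENS:
    raise ValueError("Input does not fit in the model's context window")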
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
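A short sketch contrasting the two methods (chat is an assumed existing chat model; the prompts are illustrative):

from langchain.schema import HumanMessage

answer_text = chat.predict("What is 2 + 2?", stop=["\n"])          # str in, str out
answer_msg = chat.predict_messages([HumanMessage(content="Hi!")])  # messages in, message out
print(answer_text, answer_msg.content)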
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[BaseMessageChunk]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Any¶
Get the output type for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
langchain.chat_models.openai.ChatOpenAI¶
class langchain.chat_models.openai.ChatOpenAI[source]¶
Bases: BaseChatModel
OpenAI Chat large language models API.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
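A slightly fuller sketch of the same usage (assumes a valid OPENAI_API_KEY; the prompt text is illustrative):

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
reply = chat(
    [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="Name three prime numbers."),
    ]
)
print(reply.content)  # the AIMessage text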
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
Whether to cache the response.
param callback_manager: Optional[BaseCallbackManager] = None¶
Callback manager to add to the run trace.
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: Optional[int] = None¶
Maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo' (alias 'model')¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
Base URL path for API requests, leave blank if not using a proxy or service emulator.
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this chat model class with a model name not
supported by tiktoken. This can include when using Azure OpenAI or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
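For example, a sketch for an OpenAI-compatible proxy whose model name tiktoken does not know (the endpoint URL and model names are placeholders):

chat = ChatOpenAI(
    model_name="my-proxy-model",                 # placeholder served-model name
    openai_api_base="http://localhost:8000/v1",  # placeholder proxy endpoint
    tiktoken_model_name="gpt-3.5-turbo",         # used only for token counting
)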
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of abatch, which calls ainvoke N times.
Subclasses should override this method if they can batch more efficiently.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top-level call.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
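A minimal async sketch (chat is an assumed existing ChatOpenAI instance; the prompt is illustrative):

import asyncio

from langchain.schema import HumanMessage

async def main() -> None:
    result = await chat.agenerate([[HumanMessage(content="ping")]])
    print(result.generations[0][0].text)

asyncio.run(main())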
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
Default implementation of ainvoke, which calls invoke in a thread pool.
Subclasses should override this method if they can run asynchronously.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[BaseMessageChunk]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
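A hedged sketch of consuming the patch stream (chat is an assumed existing chat model; the prompt is illustrative):

async def watch_run() -> None:
    # Each item is a RunLogPatch; applying its jsonpatch ops in order rebuilds run state.
    async for patch in chat.astream_log("Tell me a one-line joke."):
        print(patch)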
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of batch, which calls invoke N times.
Subclasses should override this method if they can batch more efficiently.
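For example (chat is an assumed existing chat model; the inputs are illustrative):

replies = chat.batch(["Say A.", "Say B."])  # invoke() is called once per input
print([message.content for message in replies])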
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
completion_with_retry(run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top-level call.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int[source]¶
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int][source]¶
Get the tokens present in the text with tiktoken package.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
classmethod is_lc_serializable() → bool[source]¶
Return whether this model can be serialized by Langchain.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[BaseMessageChunk]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
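A minimal streaming sketch (assumes a valid OPENAI_API_KEY; the prompt is illustrative):

streaming_chat = ChatOpenAI(streaming=True, temperature=0)
for chunk in streaming_chat.stream("Write a haiku about editing."):
    print(chunk.content, end="", flush=True)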
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
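A combined sketch of the two wrappers (ChatAnthropic is used purely as an illustrative fallback and assumes its own credentials):

from langchain.chat_models import ChatAnthropic, ChatOpenAI

primary = ChatOpenAI().with_retry(stop_after_attempt=2)
resilient = primary.with_fallbacks([ChatAnthropic()])
print(resilient.invoke("Hello!").content)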
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Any¶
Get the output type for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
Examples using ChatOpenAI¶
Facebook Messenger
Slack
WhatsApp
iMessage
Telegram
Discord
RePhraseQueryRetriever
Wikipedia
Arxiv
ChatGPT Plugins
Human as a tool
Yahoo Finance News
ArXiv
Metaphor Search
Shell (bash)
Xata chat memory
Dynamodb Chat Message History
OpenAI
LLMonitor
Context
Label Studio
PromptLayer
CnosDB
Log10
Flyte
Arthur
CSV
Document Comparison
Python
PowerBI Dataset
SQL Database
Airbyte Question Answering
Github
Spark SQL
AINetwork
Pandas Dataframe
Neo4j Vector Index
OpenAI Functions Metadata Tagger
Loading documents from a YouTube url
Figma
Fallbacks
Debugging
LangSmith Walkthrough
Reversible data anonymization with Microsoft Presidio
Data anonymization with Microsoft Presidio
Comparing Chain Outputs
Agent Trajectory
Custom Trajectory Evaluator
Set env var OPENAI_API_KEY or load from a .env file:
Set env var OPENAI_API_KEY or load from a .env file
Question Answering
Perform context-aware text splitting
Conversational Retrieval Agent
Multiple Retrieval Sources
Cite sources
Retrieve as you generate with FLARE
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
Neptune Open Cypher QA Chain
NebulaGraphQAChain
Memgraph QA chain
KuzuQAChain
HugeGraph QA Chain
GraphSparqlQAChain
Diffbot Graph Transformer
ArangoDB QA chain
Neo4j DB QA chain
FalkorDBQAChain
Agents
AutoGPT
!pip install bs4
Wikibase Agent
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
CAMEL Role-Playing Autonomous Cooperative Agents
Multi-Agent Simulated Environment: Petting Zoo
Multi-agent decentralized speaker selection
Multi-agent authoritarian speaker selection
Generative Agents in LangChain
Two-Player Dungeons & Dragons
Multi-Player Dungeons & Dragons
Simulated Environment: Gymnasium
Agent Debates with Tools
How to use a SmartLLMChain
Vector SQL Retriever with MyScale
Elasticsearch
SQL
MultiVector Retriever
MultiQueryRetriever
WebResearchRetriever
Memory in LLMChain
Custom callback handlers
Async callbacks
Defining Custom Tools
Tools as OpenAI Functions
OpenAI Multi Functions Agent
Handle parsing errors
Running Agent as an Iterator
Add Memory to OpenAI Functions Agent
Custom functions with OpenAI Functions Agent
Use ToolKits with OpenAI Functions
Retry parser
Pydantic (JSON) parser
Prompt pipelining
Connecting to a Feature Store
Custom chain
Using OpenAI functions
interface.md
First we add a step to load memory
sql_db.md
prompt_llm_parser.md
Adding memory
multiple_chains.md
Code writing
Using tools
Configure Runnable traces
langchain.chat_models.google_palm.ChatGooglePalmError¶
class langchain.chat_models.google_palm.ChatGooglePalmError[source]¶
Error with the Google PaLM API.
langchain.chat_models.mlflow_ai_gateway.ChatParams¶
class langchain.chat_models.mlflow_ai_gateway.ChatParams[source]¶
Bases: BaseModel
Parameters for the MLflow AI Gateway LLM.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param candidate_count: int = 1¶
The number of candidates to return.
param max_tokens: Optional[int] = None¶
param stop: Optional[List[str]] = None¶
param temperature: float = 0.0¶
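For example, a minimal sketch of constructing the parameter object (the values are illustrative):

from langchain.chat_models.mlflow_ai_gateway import ChatParams

params = ChatParams(temperature=0.1, max_tokens=200, stop=["###"])
print(params.dict(exclude_none=True))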
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.chat_models.anyscale.ChatAnyscale¶
class langchain.chat_models.anyscale.ChatAnyscale[source]¶
Bases: ChatOpenAI
Anyscale Chat large language models.
To use, you should have the openai python package installed, and the
environment variable ANYSCALE_API_KEY set with your API key.
Alternatively, you can use the anyscale_api_key keyword argument.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatAnyscale
chat = ChatAnyscale(model_name="meta-llama/Llama-2-7b-chat-hf")
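A slightly fuller sketch of the same usage (assumes ANYSCALE_API_KEY is set; the prompt is illustrative):

from langchain.chat_models import ChatAnyscale
from langchain.schema import HumanMessage

chat = ChatAnyscale(model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.5)
print(chat([HumanMessage(content="Hello!")]).content)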
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param anyscale_api_base: str = 'https://api.endpoints.anyscale.com/v1'¶
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
param anyscale_api_key: Optional[str] = None¶
AnyScale Endpoints API keys.
param anyscale_proxy: Optional[str] = None¶
To support explicit proxy for Anyscale.
param available_models: Optional[Set[str]] = None¶
Available models from Anyscale API.
param cache: Optional[bool] = None¶
Whether to cache the response.
param callback_manager: Optional[BaseCallbackManager] = None¶
Callback manager to add to the run trace.
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: Optional[int] = None¶
Maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'meta-llama/Llama-2-7b-chat-hf' (alias 'model')¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
Base URL path for API requests, leave blank if not using a proxy or service emulator.
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this chat model class with a model name not
supported by tiktoken. This can include when using Azure OpenAI or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of abatch, which calls ainvoke N times.
Subclasses should override this method if they can batch more efficiently.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top-level call.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
Default implementation of ainvoke, which calls invoke in a thread pool.
Subclasses should override this method if they can run asynchronously.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[BaseMessageChunk]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of batch, which calls invoke N times.
Subclasses should override this method if they can batch more efficiently.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
completion_with_retry(run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) → Any¶
Use tenacity to retry the completion call.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top-level call.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
static get_available_models(anyscale_api_key: Optional[str] = None, anyscale_api_base: str = 'https://api.endpoints.anyscale.com/v1') → Set[str][source]¶
Get available models from Anyscale API.
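For example (the key below is a placeholder; alternatively set the ANYSCALE_API_KEY environment variable and omit the argument):

models = ChatAnyscale.get_available_models(anyscale_api_key="<your-key>")
print(sorted(models))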
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: list[langchain.schema.messages.BaseMessage]) → int[source]¶
Calculate num tokens with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int]¶
Get the tokens present in the text with tiktoken package.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
classmethod is_lc_serializable() → bool¶
Return whether this model can be serialized by Langchain.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[BaseMessageChunk]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Any¶
Get the output type for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
Examples using ChatAnyscale¶
Anyscale
langchain.chat_models.human.HumanInputChatModel¶
class langchain.chat_models.human.HumanInputChatModel[source]¶
Bases: BaseChatModel
ChatModel which returns user input as the response.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
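A minimal interactive sketch (with the default functions, the incoming messages are printed and input() blocks for your reply; the prompt is illustrative):

from langchain.chat_models.human import HumanInputChatModel
from langchain.schema import HumanMessage

human = HumanInputChatModel()
reply = human([HumanMessage(content="What should the bot say here?")])
print(reply.content)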
param cache: Optional[bool] = None¶
Whether to cache the response.
param callback_manager: Optional[BaseCallbackManager] = None¶
Callback manager to add to the run trace.
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param input_func: Callable [Optional]¶
param input_kwargs: Mapping[str, Any] = {}¶
param message_func: Callable [Optional]¶
param message_kwargs: Mapping[str, Any] = {}¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param separator: str = '\n'¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of abatch, which calls ainvoke N times.
Subclasses should override this method if they can batch more efficiently.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
Default implementation of ainvoke, which calls invoke in a thread pool.
Subclasses should override this method if they can run asynchronously.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[BaseMessageChunk]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation of batch, which calls invoke N times.
Subclasses should override this method if they can batch more efficiently.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
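For instance, a small batched-call sketch (reusing the chat instance from the example above; the messages are illustrative):
from langchain.schema import HumanMessage

result = chat.generate(
    [[HumanMessage(content="hi")], [HumanMessage(content="bye")]],
    stop=["\n"],
)
# One list of candidate generations per input prompt.
for gens in result.generations:
    print(gens[0].text)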
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
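A quick sketch (token counts depend on the model's tokenizer; in the base implementation, get_num_tokens is the length of get_token_ids):
text = "hello world"
n = chat.get_num_tokens(text)
ids = chat.get_token_ids(text)
assert n == len(ids)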
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[BaseMessageChunk]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
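For example, a hedged sketch of layering retries and a fallback (fallback_chat is a placeholder model you would construct separately):
resilient_chat = chat.with_retry(stop_after_attempt=2).with_fallbacks([fallback_chat])
# Retries `chat` up to two attempts on any Exception, then tries `fallback_chat`.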
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Any¶
Get the output type for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
Examples using HumanInputChatModel¶
Human input chat model | https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.human.HumanInputChatModel.html |
langchain.vectorstores.redis.filters.RedisText¶
class langchain.vectorstores.redis.filters.RedisText(field: str)[source]¶
A RedisText is a RedisFilterField representing a text field in a Redis index.
Attributes
OPERATORS
OPERATOR_MAP
escaper
Methods
__init__(field)
equals(other)
__init__(field: str)¶
equals(other: RedisFilterField) → bool¶
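A brief filter-building sketch (the field name is illustrative; the equality operator follows the class's operator map, and passing the filter to a Redis store's similarity_search is an assumption based on the Redis integration):
from langchain.vectorstores.redis.filters import RedisText

is_engineer = RedisText("job") == "engineer"
# docs = redis_store.similarity_search("query", filter=is_engineer)  # hypothetical store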
Examples using RedisText¶
Redis | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.filters.RedisText.html |
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings¶
class langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings(endpoint: str, instance_id: str, username: str, password: str, datasource_name: str, embedding_index_name: str, field_name_mapping: Dict[str, str])[source]¶
Alibaba Cloud Opensearch client configuration.
Attributes:
endpoint (str): The endpoint of the OpenSearch instance. You can find it in the console of Alibaba Cloud OpenSearch.
instance_id (str): The identifier of the OpenSearch instance. You can find it in the console of Alibaba Cloud OpenSearch.
datasource_name (str): The name of the data source specified when creating it.
username (str): The username specified when purchasing the instance.
password (str): The password specified when purchasing the instance.
embedding_index_name (str): The name of the vector attribute specified
when configuring the instance attributes.
field_name_mapping (Dict): The field name mapping between the OpenSearch vector store and the OpenSearch instance configuration table field names:
{‘id’: ‘The id field name map of index document.’,
‘document’: ‘The text field name map of index document.’,
‘embedding’: ‘In the embedding field of the opensearch instance,
the values must be in float16 multivalue type and separated by commas.’,
‘metadata_field_x’: ‘Metadata field mapping includes the mapped
field name and operator in the mapping value, separated by a comma
between the mapped field name and the operator.’,
}
Attributes
field_name_mapping
endpoint
instance_id
username
password
datasource_name
embedding_index_name
Methods
__init__(endpoint, instance_id, username, ...)
__init__(endpoint: str, instance_id: str, username: str, password: str, datasource_name: str, embedding_index_name: str, field_name_mapping: Dict[str, str]) → None[source]¶
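A hedged construction sketch (every value below is a placeholder for your own instance configuration):
settings = AlibabaCloudOpenSearchSettings(
    endpoint="ha-cn-xxx.opensearch.aliyuncs.com",
    instance_id="ha-cn-xxx",
    username="your-username",
    password="your-password",
    datasource_name="your-datasource",
    embedding_index_name="embedding_index",
    field_name_mapping={
        "id": "id",
        "document": "document",
        "embedding": "embedding",
    },
)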
Examples using AlibabaCloudOpenSearchSettings¶
Alibaba Cloud OpenSearch | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings.html |
langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch¶
class langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: Embeddings, **kwargs: Any)[source]¶
Amazon OpenSearch Vector Engine vector store.
Example
from langchain.vectorstores import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
Initialize with necessary components.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(opensearch_url, index_name, ...)
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_embeddings(text_embeddings[, metadatas, ...])
Add the given texts and embeddings to the vectorstore.
add_texts(texts[, metadatas, ids, bulk_size])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_embeddings(embeddings, texts, embedding)
Construct OpenSearchVectorSearch wrapper from pre-vectorized embeddings.
from_texts(texts, embedding[, metadatas, ...])
Construct OpenSearchVectorSearch wrapper from raw texts.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Return docs and their scores most similar to the query.
__init__(opensearch_url: str, index_name: str, embedding_function: Embeddings, **kwargs: Any)[source]¶
Initialize with necessary components.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
(List[Document] (documents) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
(List[Document] (documents) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, bulk_size: int = 500, **kwargs: Any) → List[str][source]¶
Add the given texts and embeddings to the vectorstore.
Parameters
text_embeddings – Iterable pairs of string and embedding to
add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
bulk_size – Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, bulk_size: int = 500, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
bulk_size – Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
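For instance, a small indexing sketch (reusing the store from the class example; texts and metadata are illustrative):
ids = opensearch_vector_search.add_texts(
    texts=["hello world", "goodbye world"],
    metadatas=[{"source": "a.txt"}, {"source": "b.txt"}],
    bulk_size=500,
)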
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_embeddings(embeddings: List[List[float]], texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, ids: Optional[List[str]] = None, **kwargs: Any) → OpenSearchVectorSearch[source]¶
Construct OpenSearchVectorSearch wrapper from pre-vectorized embeddings.
Example
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedder = OpenAIEmbeddings()
embeddings = embedder.embed_documents(["foo", "bar"])
opensearch_vector_search = OpenSearchVectorSearch.from_embeddings(
embeddings,
texts,
embedder,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by the nmslib, faiss,
and lucene engines, which are recommended for large datasets. It also supports
brute-force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
Optional Keyword Args for Approximate Search:
engine: “nmslib”, “faiss”, “lucene”; default: “nmslib”
space_type: “l2”, “l1”, “cosinesimil”, “linf”, “innerproduct”; default: “l2”
ef_search: Size of the dynamic list used during k-NN searches. Higher values lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation. Higher values lead to more accurate graph but slower indexing speed; default: 512
m: Number of bidirectional links created for each new element. Large impact on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, ids: Optional[List[str]] = None, **kwargs: Any) → OpenSearchVectorSearch[source]¶
Construct OpenSearchVectorSearch wrapper from raw texts.
Example
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by the nmslib, faiss,
and lucene engines, which are recommended for large datasets. It also supports
brute-force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
Optional Keyword Args for Approximate Search:
engine: “nmslib”, “faiss”, “lucene”; default: “nmslib”
space_type: “l2”, “l1”, “cosinesimil”, “linf”, “innerproduct”; default: “l2”
ef_search: Size of the dynamic list used during k-NN searches. Higher values lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation. Higher values lead to more accurate graph but slower indexing speed; default: 512
m: Number of bidirectional links created for each new element. Large impact on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
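As a sketch of passing the approximate-search tuning knobs listed above to from_texts (texts and embeddings as in the example; the values are illustrative):
store = OpenSearchVectorSearch.from_texts(
    texts,
    embeddings,
    opensearch_url="http://localhost:9200",
    engine="faiss",
    space_type="innerproduct",
    ef_construction=256,
    m=48,
)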
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → list[langchain.schema.document.Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
metadata_field: Document field that metadata is stored in. Defaults to “metadata”.
Can be set to a special value “*” to include the entire document.
Optional Args for Approximate Search:
search_type: “approximate_search”; default: “approximate_search”
boolean_filter: A Boolean filter is a post filter that consists of a Boolean
query that contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: “must”
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering. (deprecated, use efficient_filter)
efficient_filter: the Lucene Engine or Faiss Engine decides whether to
perform an exact k-NN search with pre-filtering or an approximate search
with modified post-filtering.
Optional Args for Script Scoring Search:
search_type: “script_scoring”; default: “approximate_search”
space_type: “l2”, “l1”, “linf”, “cosinesimil”, “innerproduct”,
“hammingbit”; default: “l2”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
Optional Args for Painless Scripting Search:
search_type: “painless_scripting”; default: “approximate_search”
space_type: “l2Squared”, “l1Norm”, “cosineSimilarity”; default: “l2Squared”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
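A hedged sketch combining the arguments above (the pre-filter body is an illustrative OpenSearch query):
docs = opensearch_vector_search.similarity_search(
    "what is a vector store?",
    k=4,
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"bool": {"filter": {"term": {"metadata.source": "a.txt"}}}},
)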
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
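For instance (the threshold filters out weakly related documents; the query is illustrative):
docs_and_scores = opensearch_vector_search.similarity_search_with_relevance_scores(
    "what is a vector store?",
    k=4,
    score_threshold=0.8,
)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:60])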
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and it’s scores most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents, along with their scores, most similar to the query.
Optional Args: same as similarity_search
Examples using OpenSearchVectorSearch¶
OpenSearch | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html |
langchain.vectorstores.neo4j_vector.SearchType¶
class langchain.vectorstores.neo4j_vector.SearchType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the search strategies.
VECTOR = 'vector'¶
HYBRID = 'hybrid'¶ | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.neo4j_vector.SearchType.html |
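A short usage sketch (passing the enum to the Neo4j vector store constructor is an assumption based on this integration):
from langchain.vectorstores.neo4j_vector import SearchType

search_type = SearchType.HYBRID
# store = Neo4jVector.from_texts(texts, embeddings, search_type=search_type)  # hypothetical setup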
langchain.vectorstores.redis.schema.read_schema¶
langchain.vectorstores.redis.schema.read_schema(index_schema: Optional[Union[Dict[str, str], str, PathLike]]) → Dict[str, Any][source]¶ | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.redis.schema.read_schema.html |
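A hedged sketch (per the signature, a dict or a path-like value is accepted; the YAML path is a placeholder):
from langchain.vectorstores.redis.schema import read_schema

index_schema = read_schema("redis_schema.yaml")  # illustrative path to a schema file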
langchain.vectorstores.azuresearch.AzureSearch¶
class langchain.vectorstores.azuresearch.AzureSearch(azure_search_endpoint: str, azure_search_key: str, index_name: str, embedding_function: Callable, search_type: str = 'hybrid', semantic_configuration_name: Optional[str] = None, semantic_query_language: str = 'en-us', fields: Optional[List[SearchField]] = None, vector_search: Optional[VectorSearch] = None, semantic_settings: Optional[SemanticSettings] = None, scoring_profiles: Optional[List[ScoringProfile]] = None, default_scoring_profile: Optional[str] = None, **kwargs: Any)[source]¶
Azure Cognitive Search vector store.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(azure_search_endpoint, ...[, ...])
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Add text data to an existing index.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
hybrid_search(query[, k])
Returns the most similar indexed documents to the query text.
hybrid_search_with_score(query[, k, filters])
Return docs most similar to query with a hybrid query.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
semantic_hybrid_search(query[, k])
Returns the most similar indexed documents to the query text.
semantic_hybrid_search_with_score(query[, ...])
Return docs most similar to query with a hybrid query.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance. | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.azuresearch.AzureSearch.html |
4fb7688e5c72-2 | similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
vector_search(query[, k])
Returns the most similar indexed documents to the query text.
vector_search_with_score(query[, k, filters])
Return docs most similar to query.
__init__(azure_search_endpoint: str, azure_search_key: str, index_name: str, embedding_function: Callable, search_type: str = 'hybrid', semantic_configuration_name: Optional[str] = None, semantic_query_language: str = 'en-us', fields: Optional[List[SearchField]] = None, vector_search: Optional[VectorSearch] = None, semantic_settings: Optional[SemanticSettings] = None, scoring_profiles: Optional[List[ScoringProfile]] = None, default_scoring_profile: Optional[str] = None, **kwargs: Any)[source]¶
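A hedged construction sketch (endpoint, key, and index name are placeholders; passing an embedding object's embed_query as embedding_function follows the LangChain Azure examples):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

embeddings = OpenAIEmbeddings()
vector_store = AzureSearch(
    azure_search_endpoint="https://<your-service>.search.windows.net",
    azure_search_key="<your-admin-key>",
    index_name="langchain-index",
    embedding_function=embeddings.embed_query,
)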
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
(List[Document] (documents) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
(List[Document] (documents) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Add text data to an existing index.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, azure_search_endpoint: str = '', azure_search_key: str = '', index_name: str = 'langchain-index', **kwargs: Any) → AzureSearch[source]¶
Return VectorStore initialized from texts and embeddings.
hybrid_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
hybrid_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query with a hybrid query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query, with a score for each
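For instance (the OData filter expression over index fields is illustrative):
results = vector_store.hybrid_search_with_score(
    "what is langchain?",
    k=4,
    filters="metadata eq 'a.txt'",
)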
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
semantic_hybrid_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
semantic_hybrid_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query with a hybrid query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query, with a score for each
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶
Run similarity search with distance.
vector_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
vector_search_with_score(query: str, k: int = 4, filters: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query, with a score for each
Examples using AzureSearch¶
Azure Cognitive Search | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.azuresearch.AzureSearch.html |
langchain.vectorstores.neo4j_vector.check_if_not_null¶
langchain.vectorstores.neo4j_vector.check_if_not_null(props: List[str], values: List[Any]) → None[source]¶ | https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.neo4j_vector.check_if_not_null.html |
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch¶
class langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(embedding: Embeddings, config: AlibabaCloudOpenSearchSettings, **kwargs: Any)[source]¶
Alibaba Cloud OpenSearch vector store.
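A hedged construction sketch (the settings object is built as in the AlibabaCloudOpenSearchSettings section above; the embedding model is illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.alibabacloud_opensearch import (
    AlibabaCloudOpenSearch,
    AlibabaCloudOpenSearchSettings,
)

embeddings = OpenAIEmbeddings()
store = AlibabaCloudOpenSearch(embedding=embeddings, config=settings)  # `settings` as shown earlier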
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(embedding, config, **kwargs)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
create_results(json_result)
create_results_with_score(json_result)
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding[, ids, ...])
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, config])
Return VectorStore initialized from texts and embeddings.
inner_embedding_query(embedding[, ...])
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, search_filter])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
__init__(embedding: Embeddings, config: AlibabaCloudOpenSearchSettings, **kwargs: Any) → None[source]¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
create_results(json_result: Dict[str, Any]) → List[Document][source]¶
create_results_with_score(json_result: Dict[str, Any]) → List[Tuple[Document, float]][source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, ids: Optional[List[str]] = None, config: Optional[AlibabaCloudOpenSearchSettings] = None, **kwargs: Any) → AlibabaCloudOpenSearch[source]¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, config: Optional[AlibabaCloudOpenSearchSettings] = None, **kwargs: Any) → AlibabaCloudOpenSearch[source]¶
Return VectorStore initialized from texts and embeddings.
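A hedged construction sketch. The AlibabaCloudOpenSearchSettings field names below are placeholders for illustration, not confirmed by this page; consult the settings class for the real ones:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.alibabacloud_opensearch import (
    AlibabaCloudOpenSearch,
    AlibabaCloudOpenSearchSettings,
)

# Field names in this settings object are illustrative assumptions.
settings = AlibabaCloudOpenSearchSettings(
    endpoint="<your-opensearch-endpoint>",
    instance_id="<your-instance-id>",
    username="<user>",
    password="<password>",
)

store = AlibabaCloudOpenSearch.from_texts(
    texts=["hello world", "another text"],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"source": "a"}, {"source": "b"}],
    config=settings,
)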
inner_embedding_query(embedding: List[float], search_filter: Optional[Dict[str, Any]] = None, k: int = 4) → Dict[str, Any][source]¶
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, search_filter: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, search_filter: Optional[dict] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
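A short sketch combining the filtered and score-thresholded variants above; the metadata field name is hypothetical:
# Filtered search: `search_filter` matches against document metadata.
docs = store.similarity_search(
    "cloud databases", k=4, search_filter={"source": "docs"}
)

# Scored search, keeping only hits with relevance >= 0.8.
scored = store.similarity_search_with_relevance_scores(
    "cloud databases", k=4, score_threshold=0.8
)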
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶
Run similarity search with distance.
Examples using AlibabaCloudOpenSearch¶
Alibaba Cloud OpenSearch
langchain.vectorstores.elastic_vector_search.ElasticKnnSearch¶
class langchain.vectorstores.elastic_vector_search.ElasticKnnSearch(index_name: str, embedding: Embeddings, es_connection: Optional['Elasticsearch'] = None, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, vector_query_field: Optional[str] = 'vector', query_field: Optional[str] = 'text')[source]¶
[Deprecated] Elasticsearch with k-nearest neighbor search
(k-NN) vector store.
Recommended to use ElasticsearchStore instead, which supports
metadata filtering, customising the query retriever and much more!
You can read more on ElasticsearchStore:
https://python.langchain.com/docs/integrations/vectorstores/elasticsearch
It creates an Elasticsearch index of text data that
can be searched using k-NN search. The text data is transformed into
vector embeddings using a provided embedding model, and these embeddings
are stored in the Elasticsearch index.
index_name¶
The name of the Elasticsearch index.
Type
str
embedding¶
The embedding model to use for transforming text data
into vector embeddings.
Type
Embeddings
es_connection¶
An existing Elasticsearch connection.
Type
Elasticsearch, optional
es_cloud_id¶
The Cloud ID of your Elasticsearch Service
deployment.
Type
str, optional
es_user¶
The username for your Elasticsearch Service deployment.
Type
str, optional
es_password¶
The password for your Elasticsearch Service
deployment.
Type
str, optional
vector_query_field¶
The name of the field in the Elasticsearch
index that contains the vector embeddings.
Type
str, optional
query_field¶
The name of the field in the Elasticsearch index
that contains the original text data.
Type
str, optional
Usage:
>>> from embeddings import Embeddings
>>> embedding = Embeddings.load('glove')
>>> es_search = ElasticKnnSearch('my_index', embedding)
>>> es_search.add_texts(['Hello world!', 'Another text'])
>>> results = es_search.knn_search('Hello')
[(Document(page_content='Hello world!', metadata={}), 0.9)]
Notes
Deprecated since version 0.0.265: Use the ElasticsearchStore class instead.
Attributes
embeddings
Access the query embedding object if available.
Methods
__init__(index_name, embedding[, ...])
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, model_id, ...])
Add a list of texts to the Elasticsearch index.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
create_knn_index(mapping)
Create a new k-NN index in Elasticsearch.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas])
Create a new ElasticKnnSearch instance and add a list of texts to the Elasticsearch index.
knn_hybrid_search([query, k, query_vector, ...])
Perform a hybrid k-NN and text search on the Elasticsearch index.
knn_search([query, k, query_vector, ...])
Perform a k-NN search on the Elasticsearch index.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Pass through to knn_search
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Pass through to knn_search including score
__init__(index_name: str, embedding: Embeddings, es_connection: Optional['Elasticsearch'] = None, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, vector_query_field: Optional[str] = 'vector', query_field: Optional[str] = 'text')[source]¶
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[Dict[Any, Any]]] = None, model_id: Optional[str] = None, refresh_indices: bool = False, **kwargs: Any) → List[str][source]¶
Add a list of texts to the Elasticsearch index.
Parameters
texts (Iterable[str]) – The texts to add to the index.
metadatas (List[Dict[Any, Any]], optional) – A list of metadata dictionaries
to associate with the texts.
model_id (str, optional) – The ID of the model to use for transforming the
texts into vectors.
refresh_indices (bool, optional) – Whether to refresh the Elasticsearch
indices after adding the texts.
**kwargs – Arbitrary keyword arguments.
Returns
A list of IDs for the added texts.
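For instance, using only the parameters documented above (`es_search` is an assumed existing instance):
ids = es_search.add_texts(
    ["Hello world!", "Another text"],
    metadatas=[{"lang": "en"}, {"lang": "en"}],
    refresh_indices=True,
)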
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)
# Only retrieve documents that have a relevance score
# Above a certain threshold
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
# Only get the single most similar document from the dataset
docsearch.as_retriever(search_kwargs={'k': 1})
# Use a filter to only retrieve documents from a specific paper
docsearch.as_retriever(
search_kwargs={'filter': {'paper_title':'GPT-4 Technical Report'}}
)
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
create_knn_index(mapping: Dict) → None[source]¶
Create a new k-NN index in Elasticsearch.
Parameters
mapping (Dict) – The mapping to use for the new index.
Returns
None
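A plausible mapping for such an index. The dense_vector settings follow standard Elasticsearch 8.x conventions and are assumptions, not taken from this page; dims must match your embedding model:
mapping = {
    "properties": {
        "text": {"type": "text"},
        "vector": {
            "type": "dense_vector",
            "dims": 1536,          # assumed embedding dimensionality
            "index": True,
            "similarity": "cosine",
        },
    }
}
es_search.create_knn_index(mapping)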
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → ElasticKnnSearch[source]¶
Create a new ElasticKnnSearch instance and add a list of texts to the Elasticsearch index.
Parameters
texts (List[str]) – The texts to add to the index.
embedding (Embeddings) – The embedding model to use for transforming the
texts into vectors.
metadatas (List[Dict[Any, Any]], optional) – A list of metadata dictionaries
to associate with the texts.
**kwargs – Arbitrary keyword arguments.
Returns
A new ElasticKnnSearch instance.
knn_hybrid_search(query: Optional[str] = None, k: Optional[int] = 10, query_vector: Optional[List[float]] = None, model_id: Optional[str] = None, size: Optional[int] = 10, source: Optional[bool] = True, knn_boost: Optional[float] = 0.9, query_boost: Optional[float] = 0.1, fields: Optional[Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...]]] = None, page_content: Optional[str] = 'text') → List[Tuple[Document, float]][source]¶
Perform a hybrid k-NN and text search on the Elasticsearch index.
Parameters
query (str, optional) – The query text to search for.
k (int, optional) – The number of nearest neighbors to return.
query_vector (List[float], optional) – The query vector to search for.
model_id (str, optional) – The ID of the model to use for transforming the
query text into a vector.
size (int, optional) – The number of search results to return.
source (bool, optional) – Whether to return the source of the search results.
knn_boost (float, optional) – The boost value to apply to the k-NN search
results.
query_boost (float, optional) – The boost value to apply to the text search
results.
fields (List[Mapping[str, Any]], optional) – The fields to return in the
search results.
page_content (str, optional) – The name of the field that contains the page
content.
Returns
A list of tuples, where each tuple contains a Document object and a score.
knn_search(query: Optional[str] = None, k: Optional[int] = 10, query_vector: Optional[List[float]] = None, model_id: Optional[str] = None, size: Optional[int] = 10, source: Optional[bool] = True, fields: Optional[Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...]]] = None, page_content: Optional[str] = 'text') → List[Tuple[Document, float]][source]¶
Perform a k-NN search on the Elasticsearch index.
Parameters
query (str, optional) – The query text to search for.
k (int, optional) – The number of nearest neighbors to return.
query_vector (List[float], optional) – The query vector to search for.
model_id (str, optional) – The ID of the model to use for transforming the
query text into a vector.
size (int, optional) – The number of search results to return.
source (bool, optional) – Whether to return the source of the search results.
fields (List[Mapping[str, Any]], optional) – The fields to return in the
search results.
page_content (str, optional) – The name of the field that contains the page
content.
Returns
A list of tuples, where each tuple contains a Document object and a score.
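For example, by query text or by a precomputed vector; whether text queries are embedded client-side or via a deployed model_id is not stated on this page, so both routes are shown as sketches:
# By text:
hits = es_search.knn_search(query="Hello", k=5)

# By a precomputed query vector (assumes `embedding` from __init__):
vector = embedding.embed_query("Hello")
hits = es_search.knn_search(query_vector=vector, k=5)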
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Pass through to knn_search
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 10, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Pass through to knn_search including score
langchain.vectorstores.timescalevector.TimescaleVector¶
class langchain.vectorstores.timescalevector.TimescaleVector(service_url: str, embedding: Embeddings, collection_name: str = 'langchain_store', num_dimensions: int = 1536, distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, pre_delete_collection: bool = False, logger: Optional[Logger] = None, relevance_score_fn: Optional[Callable[[float], float]] = None, time_partition_interval: Optional[timedelta] = None)[source]¶
VectorStore implementation using the timescale vector client to store vectors
in Postgres.
To use, you should have the timescale_vector python package installed.
Parameters
service_url – Service url on timescale cloud.
embedding – Any embedding function implementing the
langchain.embeddings.base.Embeddings interface.
collection_name – The name of the collection to use. (default: langchain_store)
This will become the table name used for the collection.
distance_strategy – The distance strategy to use. (default: COSINE)
pre_delete_collection – If True, will delete the collection if it exists.
(default: False). Useful for testing.
Example
from langchain.vectorstores import TimescaleVector
from langchain.embeddings.openai import OpenAIEmbeddings
SERVICE_URL = "postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require"
COLLECTION_NAME = "state_of_the_union_test"
embeddings = OpenAIEmbeddings()
vectorstore = TimescaleVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name=COLLECTION_NAME,
service_url=SERVICE_URL,
)
Attributes
DEFAULT_INDEX_TYPE
embeddings
Access the query embedding object if available.
Methods
__init__(service_url, embedding[, ...])
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_embeddings(texts, embeddings[, ...])
Add embeddings to the vectorstore.
aadd_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_embeddings(texts, embeddings[, ...])
Add embeddings to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_embeddings(text_embeddings, embedding)
Construct TimescaleVector wrapper from raw documents and pre-generated embeddings.
afrom_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
Return VectorStoreRetriever initialized from this VectorStore.
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k, filter, ...])
Run similarity search with TimescaleVector with distance.
asimilarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
asimilarity_search_with_score(query[, k, ...])
Return docs most similar to query.
asimilarity_search_with_score_by_vector(...)
create_index([index_type])
date_to_range_filter(**kwargs)
delete([ids])
Delete by vector ID or other criteria.
delete_by_metadata(filter, **kwargs)
Delete by vector ID or other criteria.
drop_index()
drop_tables()
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_embeddings(text_embeddings, embedding)
Construct TimescaleVector wrapper from raw documents and pre-generated embeddings.
from_existing_index(embedding[, ...])
Get instance of an existing TimescaleVector store. This method will return the instance of the store without inserting any new embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
get_service_url(kwargs)
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
service_url_from_db_params(host, port, ...)
Return connection string from database parameters.
similarity_search(query[, k, filter, predicates])
Run similarity search with TimescaleVector with distance.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
__init__(service_url: str, embedding: Embeddings, collection_name: str = 'langchain_store', num_dimensions: int = 1536, distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, pre_delete_collection: bool = False, logger: Optional[Logger] = None, relevance_score_fn: Optional[Callable[[float], float]] = None, time_partition_interval: Optional[timedelta] = None) → None[source]¶
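A construction sketch exercising the time-partitioning option from the signature above (connection values are placeholders):
from datetime import timedelta
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.timescalevector import TimescaleVector

store = TimescaleVector(
    service_url="postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require",
    embedding=OpenAIEmbeddings(),
    collection_name="langchain_store",
    # One time partition per week of ingested data:
    time_partition_interval=timedelta(days=7),
)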
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_embeddings(texts: Iterable[str], embeddings: List[List[float]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Add embeddings to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
embeddings – List of list of embedding vectors.
metadatas – List of metadatas associated with the texts.
kwargs – vectorstore specific parameters
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_embeddings(texts: Iterable[str], embeddings: List[List[float]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Add embeddings to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
embeddings – List of list of embedding vectors.
metadatas – List of metadatas associated with the texts.
kwargs – vectorstore specific parameters
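A short sketch of supplying precomputed vectors with their texts (`embeddings_model` and `store` are assumed existing objects):
texts = ["doc one", "doc two"]
vectors = embeddings_model.embed_documents(texts)
ids = store.add_embeddings(
    texts=texts,
    embeddings=vectors,
    metadatas=[{"topic": "a"}, {"topic": "b"}],
)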
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain_store', distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → TimescaleVector[source]¶
Construct TimescaleVector wrapper from raw documents and pre-generated embeddings.
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required. Either pass it as a parameter
or set the TIMESCALE_SERVICE_URL environment variable.
Example
from langchain.vectorstores import TimescaleVector
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
tvs = TimescaleVector.from_embeddings(text_embedding_pairs, embeddings)
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain_store', distance_strategy: DistanceStrategy = DistanceStrategy.COSINE, ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → TimescaleVector[source]¶
Return VectorStore initialized from texts and embeddings.
Postgres connection string is required. Either pass it as a parameter
or set the TIMESCALE_SERVICE_URL environment variable.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
Return VectorStoreRetriever initialized from this VectorStore.
Parameters
search_type (Optional[str]) – Defines the type of search that
the Retriever should perform.
Can be “similarity” (default), “mmr”, or
“similarity_score_threshold”.
search_kwargs (Optional[Dict]) – Keyword arguments to pass to the
search function. Can include things like:
k: Amount of documents to return (Default: 4)
score_threshold: Minimum relevance threshold
for similarity_score_threshold
fetch_k: Amount of documents to pass to MMR algorithm (Default: 20)
lambda_mult: Diversity of results returned by MMR;
1 for minimum diversity and 0 for maximum. (Default: 0.5)
filter: Filter by document metadata
Returns
Retriever class for VectorStore.
Return type
VectorStoreRetriever
Examples:
# Retrieve more documents with higher diversity
# Useful if your dataset has many similar documents
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'lambda_mult': 0.25}
)
# Fetch more documents for the MMR algorithm to consider
# But only return the top 5
docsearch.as_retriever(
search_type="mmr",
search_kwargs={'k': 5, 'fetch_k': 50}
)