langchain.smith.evaluation.string_run_evaluator.StringRunMapper

class langchain.smith.evaluation.string_run_evaluator.StringRunMapper[source]
Bases: Serializable

Extract items to evaluate from the run object.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

__call__(run: Run) → Dict[str, str][source]
    Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model
classmethod get_lc_namespace() → List[str]
    Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].
classmethod is_lc_serializable() → bool
    Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → str
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod lc_id() → List[str]
    A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
abstract map(run: Run) → Dict[str, str][source]
    Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: str = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/definitions/{model}', **dumps_kwargs: Any) → str
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns, and localns.
classmethod validate(value: Any) → Model
property lc_attributes: Dict
    List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]
    A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"}.
property output_keys: List[str]
    The keys to extract from the run.
https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringRunMapper.html
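
Since map() is abstract, StringRunMapper is used by subclassing. A minimal sketch, where the subclass name and the "question"/"answer" run keys are illustrative assumptions rather than part of the API:

    from typing import Dict

    from langsmith.schemas import Run
    from langchain.smith.evaluation.string_run_evaluator import StringRunMapper

    class QuestionAnswerRunMapper(StringRunMapper):
        """Hypothetical mapper: pull a 'question' input and 'answer' output."""

        def map(self, run: Run) -> Dict[str, str]:
            # map() is the only abstract method; __call__ delegates to it.
            return {
                "input": run.inputs["question"],       # assumed input key
                "prediction": run.outputs["answer"],   # assumed output key
            }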
langchain.smith.evaluation.name_generation.random_name

langchain.smith.evaluation.name_generation.random_name(prefix: str = 'test') → str[source]
    Generate a random name.
https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.name_generation.random_name.html
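
A quick usage sketch; the values shown in the comments are illustrative, since the output is random and the exact naming scheme is up to the implementation:

    from langchain.smith.evaluation.name_generation import random_name

    print(random_name())               # e.g. "test-copious-foot-13"
    print(random_name(prefix="eval"))  # same scheme, with an "eval" prefix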
langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper

class langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper[source]
Bases: StringRunMapper

Extract items to evaluate from the run object.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

__call__(run: Run) → Dict[str, str]
    Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model
classmethod get_lc_namespace() → List[str]
    Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].
classmethod is_lc_serializable() → bool
    Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → str
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod lc_id() → List[str]
    A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
map(run: Run) → Dict[str, str][source]
    Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: str = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/definitions/{model}', **dumps_kwargs: Any) → str
serialize_chat_messages(messages: List[Dict]) → str[source]
    Extract the input messages from the run.
serialize_inputs(inputs: Dict) → str[source]
serialize_outputs(outputs: Dict) → str[source]
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns, and localns.
classmethod validate(value: Any) → Model
property lc_attributes: Dict
    List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]
    A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"}.
property output_keys: List[str]
    The keys to extract from the run.
https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper.html
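
A sketch of applying the mapper to a traced LLM run fetched from LangSmith; the run id is a placeholder, and the exact serialized strings depend on the run:

    from langsmith import Client
    from langchain.smith.evaluation.string_run_evaluator import LLMStringRunMapper

    client = Client()
    run = client.read_run("<llm-run-id>")  # a run whose run_type is "llm"

    mapper = LLMStringRunMapper()
    mapped = mapper(run)
    # Expect a dict keyed by mapper.output_keys, with the serialized prompt or
    # messages as the eval input and the serialized completion as the prediction.
    print(mapped)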
langchain.smith.evaluation.progress.ProgressBarCallback

class langchain.smith.evaluation.progress.ProgressBarCallback(total: int, ncols: int = 50, **kwargs: Any)[source]
A simple progress bar for the console.

Initialize the progress bar.
Parameters
    total – int, the total number of items to be processed.
    ncols – int, the character width of the progress bar.

Attributes
    ignore_agent – Whether to ignore agent callbacks.
    ignore_chain – Whether to ignore chain callbacks.
    ignore_chat_model – Whether to ignore chat model callbacks.
    ignore_llm – Whether to ignore LLM callbacks.
    ignore_retriever – Whether to ignore retriever callbacks.
    ignore_retry – Whether to ignore retry callbacks.
    raise_error
    run_inline

Methods
__init__(total: int, ncols: int = 50, **kwargs: Any)[source]
    Initialize the progress bar.
    Parameters
        total – int, the total number of items to be processed.
        ncols – int, the character width of the progress bar.
increment() → None[source]
    Increment the counter and update the progress bar.
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]
    Run when chain ends running.
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]
    Run when LLM ends running.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when LLM errors.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run on new LLM token. Only available when streaming is enabled.
    Parameters
        token (str) – The new token.
        chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]
    Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]
    Run when tool ends running.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when tool starts running.
https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.progress.ProgressBarCallback.html
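
A sketch of wiring the bar into a loop of LLM calls; the model and prompts are placeholders, and any component that fires on_llm_end would advance the bar the same way:

    from langchain.chat_models import ChatOpenAI
    from langchain.smith.evaluation.progress import ProgressBarCallback

    prompts = ["What is 2 + 2?", "Name a prime number.", "What color is the sky?"]
    progress = ProgressBarCallback(total=len(prompts), ncols=40)

    llm = ChatOpenAI(temperature=0)
    for prompt in prompts:
        # on_llm_end fires once per call and advances the bar via increment().
        llm.invoke(prompt, config={"callbacks": [progress]})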
langchain.smith.evaluation.runner_utils.arun_on_dataset

async langchain.smith.evaluation.runner_utils.arun_on_dataset(client: Optional[Client], dataset_name: str, llm_or_chain_factory: Union[Callable[[], Union[Chain, Runnable]], BaseLanguageModel, Callable[[dict], Any], Runnable, Chain], *, evaluation: Optional[RunEvalConfig] = None, concurrency_level: int = 5, project_name: Optional[str] = None, project_metadata: Optional[Dict[str, Any]] = None, verbose: bool = False, tags: Optional[List[str]] = None, **kwargs: Any) → Dict[str, Any][source]

Run the Chain or language model on a dataset and store traces to the specified project name.

Parameters
    client – LangSmith client to use to access the dataset and to log feedback and run traces.
    dataset_name – Name of the dataset to run the chain on.
    llm_or_chain_factory – Language model or Chain constructor to run over the dataset. The Chain constructor is used to permit independent calls on each example without carrying over state.
    evaluation – Configuration for evaluators to run on the results of the chain.
    concurrency_level – The number of async tasks to run concurrently.
    project_name – Name of the project to store the traces in. Defaults to {dataset_name}-{chain class name}-{datetime}.
    project_metadata – Optional metadata to add to the project. Useful for storing information about the test variant (prompt version, model version, etc.).
    verbose – Whether to print progress.
    tags – Tags to add to each run in the project.

Returns
    A dictionary containing the run's project name and the resulting model outputs.

For the synchronous version of this function, see run_on_dataset().

Examples

    from langsmith import Client
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import LLMChain
    from langchain.smith import RunEvalConfig, arun_on_dataset

    # Chains may have memory. Passing in a constructor function lets the
    # evaluation framework avoid cross-contamination between runs.
    def construct_chain():
        llm = ChatOpenAI(temperature=0)
        chain = LLMChain.from_string(
            llm, "What's the answer to {your_input_key}"
        )
        return chain

    # Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
    evaluation_config = RunEvalConfig(
        evaluators=[
            "qa",  # "Correctness" against a reference answer
            "embedding_distance",
            RunEvalConfig.Criteria("helpfulness"),
            RunEvalConfig.Criteria({
                "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
            }),
        ]
    )

    client = Client()
    await arun_on_dataset(
        client,
        "<my_dataset_name>",
        construct_chain,
        evaluation=evaluation_config,
    )

You can also create custom evaluators by subclassing the StringEvaluator or LangSmith's RunEvaluator classes.

    from typing import Optional
    from langchain.evaluation import StringEvaluator

    class MyStringEvaluator(StringEvaluator):

        @property
        def requires_input(self) -> bool:
            return False

        @property
        def requires_reference(self) -> bool:
            return True

        @property
        def evaluation_name(self) -> str:
            return "exact_match"

        def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
            return {"score": prediction == reference}

    evaluation_config = RunEvalConfig(
        custom_evaluators=[MyStringEvaluator()],
    )

    await arun_on_dataset(
        client,
        "<my_dataset_name>",
        construct_chain,
        evaluation=evaluation_config,
    )

Examples using arun_on_dataset
    LangSmith Walkthrough
https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.runner_utils.arun_on_dataset.html
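
The examples above use a bare await, which assumes an already-running event loop (e.g. a notebook). A self-contained sketch of driving the coroutine from a plain script; the dataset name is a placeholder:

    import asyncio

    from langsmith import Client
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import LLMChain
    from langchain.smith import RunEvalConfig, arun_on_dataset

    def construct_chain():
        return LLMChain.from_string(
            ChatOpenAI(temperature=0), "What's the answer to {your_input_key}"
        )

    async def main() -> None:
        client = Client()
        results = await arun_on_dataset(
            client,
            "<my_dataset_name>",
            construct_chain,
            evaluation=RunEvalConfig(evaluators=["qa"]),
            concurrency_level=5,
        )
        print(results)  # the project name plus the per-example model outputs

    asyncio.run(main())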
langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper

class langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper[source]
Bases: StringRunMapper

Extract items to evaluate from the run object from a chain.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param input_key: Optional[str] = None
    The key from the model Run's inputs to use as the eval input. If not provided, will use the only input key, or raise an error if there are multiple.
param prediction_key: Optional[str] = None
    The key from the model Run's outputs to use as the eval prediction. If not provided, will use the only output key, or raise an error if there are multiple.

__call__(run: Run) → Dict[str, str]
    Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model
classmethod get_lc_namespace() → List[str]
    Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].
classmethod is_lc_serializable() → bool
    Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → str
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod lc_id() → List[str]
    A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
map(run: Run) → Dict[str, str][source]
    Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: str = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/definitions/{model}', **dumps_kwargs: Any) → str
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns, and localns.
classmethod validate(value: Any) → Model
property lc_attributes: Dict
    List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]
    A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"}.
property output_keys: List[str]
    The keys to extract from the run.
https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper.html
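
A sketch of pinning the keys for a chain run with several inputs or outputs; the key names are illustrative:

    from langchain.smith.evaluation.string_run_evaluator import ChainStringRunMapper

    # Without explicit keys, the mapper falls back to the run's only
    # input/output key and raises if there are several.
    mapper = ChainStringRunMapper(
        input_key="question",     # which run input becomes the eval input
        prediction_key="answer",  # which run output becomes the eval prediction
    )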
langchain_experimental.tot.prompts.JSONListOutputParser

class langchain_experimental.tot.prompts.JSONListOutputParser[source]
Bases: BaseOutputParser

Class to parse the output of a PROPOSE_PROMPT response.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]
    Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.config.RunnableConfig | None = None, **kwargs: Optional[Any]) → T
    Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously.
async aparse(text: str) → T
    Parse a single string model output into some structure.
    Parameters
        text – String output of a language model.
    Returns
        Structured output.
async aparse_result(result: List[Generation], *, partial: bool = False) → T
    Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
    Parameters
        result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
    Returns
        Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]
    Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]
    Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]
    Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]
    Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
bind(**kwargs: Any) → Runnable[Input, Output]
    Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(**kwargs: Any) → Dict
    Return dictionary representation of output parser.
classmethod from_orm(obj: Any) → Model
get_format_instructions() → str
    Instructions on how the LLM output should be formatted.
classmethod get_lc_namespace() → List[str]
    Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T
classmethod is_lc_serializable() → bool
    Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → str
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod lc_id() → List[str]
    A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
map() → Runnable[List[Input], List[Output]]
    Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
parse(text: str) → List[str][source]
    Parse the output of an LLM call.
classmethod parse_file(path: Union[str, Path], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: str = None, encoding: str = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
parse_result(result: List[Generation], *, partial: bool = False) → T
    Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
    Parameters
        result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
    Returns
        Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any
    Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
    Parameters
        completion – String output of a language model.
        prompt – Input PromptValue.
    Returns
        Structured output.
classmethod schema(by_alias: bool = True, ref_template: str = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/definitions/{model}', **dumps_kwargs: Any) → str
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]
    Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]
    Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns, and localns.
classmethod validate(value: Any) → Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]
    Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]
property InputType: Any
property OutputType: type[T]
property input_schema: Type[pydantic.main.BaseModel]
property lc_attributes: Dict
    List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]
    A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"}.
property output_schema: Type[pydantic.main.BaseModel]
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.JSONListOutputParser.html
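
A sketch of parsing a PROPOSE_PROMPT-style completion; the fenced-JSON payload below is an assumption about how a model answers the format instructions, and the exact extraction behavior is up to the parser implementation:

    from langchain_experimental.tot.prompts import JSONListOutputParser

    parser = JSONListOutputParser()
    print(parser.get_format_instructions())

    completion = """Here are my proposals:
    ```json
    ["try the left branch", "try the right branch"]
    ```"""

    thoughts = parser.parse(completion)
    # Expected, assuming the payload parses as a JSON list of strings:
    # ["try the left branch", "try the right branch"]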
e6e981951204-0
langchain_experimental.tot.thought_generation.SampleCoTStrategy¶ class langchain_experimental.tot.thought_generation.SampleCoTStrategy[source]¶ Bases: BaseThoughtGenerationStrategy Sample thoughts from a Chain-of-Thought (CoT) prompt. This strategy works better when the thought space is rich, such as when each thought is a paragraph. Independent and identically distributed samples lead to diversity, which helps to avoid repetition. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param c: int = 3¶ The number of children thoughts to propose at each step. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain,
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-1
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param output_key: str = 'text'¶ param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['problem_description', 'thoughts'], template="You are an intelligent agent that is generating one thought at a time in\na tree of thoughts setting.\n\nPROBLEM \n\n{{problem_description}}\n\n{% if thoughts %}\nTHOUGHTS\n\n{% for thought in thoughts %}\n{{ thought }}\n{% endfor %}\n{% endif %}\n\nLet's think step by step.", template_format='jinja2')¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-2
will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-3
Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-4
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny")
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-5
Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string:
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-6
Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-7
Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-8
Create outputs from response. dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_orm(obj: Any) → Model¶ classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-9
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. next_thought(problem_description: str, thoughts_path: Tuple[str, ...] = (), **kwargs: Any) → str[source]¶ Generate the next thought given the problem description and the thoughts generated so far. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-10
Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-11
*args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-12
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Type[langchain.schema.runnable.utils.Input]¶ property OutputType: Type[langchain.schema.runnable.utils.Output]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
e6e981951204-13
property OutputType: Type[langchain.schema.runnable.utils.Output]¶ property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.SampleCoTStrategy.html
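A minimal usage sketch for SampleCoTStrategy.next_thought(), based on the signature above. The OpenAI import, the default-prompt construction, and the placeholder problem text are illustrative assumptions; any BaseLanguageModel that yields varied completions should work.

from langchain.llms import OpenAI
from langchain_experimental.tot.thought_generation import SampleCoTStrategy

# A non-zero temperature is assumed so that repeated samples differ.
llm = OpenAI(temperature=1.0)
strategy = SampleCoTStrategy(llm=llm)  # assumes the default CoT prompt is used

problem = "Fill in the missing digits of this row: 3,*,*,2"  # hypothetical problem
# Each call samples one new thought, conditioned on the thoughts generated so far.
first = strategy.next_thought(problem_description=problem)
second = strategy.next_thought(problem_description=problem, thoughts_path=(first,))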
92b0c632e35d-0
langchain_experimental.tot.base.ToTChain¶ class langchain_experimental.tot.base.ToTChain[source]¶ Bases: Chain A Chain implementing the Tree of Thought (ToT). Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param c: int = 3¶ The number of children to explore at each node param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param checker: ToTChecker [Required]¶ ToT Checker to use. param k: int = 10¶ The maximum number of conversation rounds param llm: BaseLanguageModel [Required]¶ Language model to use. It must be set to produce different variations for the same prompt. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-1
and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case. param tot_controller: ToTController = <langchain_experimental.tot.controller.ToTController object>¶ param tot_memory: ToTDFSMemory = <langchain_experimental.tot.memory.ToTDFSMemory object>¶ param tot_strategy_class: Type[BaseThoughtGenerationStrategy] = <class 'langchain_experimental.tot.thought_generation.ProposePromptStrategy'>¶ param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. param verbose_llm: bool = False¶ __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-2
response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-3
Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-4
The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-5
Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-6
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_llm(llm: BaseLanguageModel, **kwargs: Any) → ToTChain[source]¶ Create a ToTChain from a language model. Parameters llm – The language model to use. kwargs – Additional arguments to pass to the ToTChain constructor. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-7
Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. log_thought(thought: Thought, level: int, run_manager: Optional[CallbackManagerForChainRun] = None) → None[source]¶ map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-8
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-9
sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-10
Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Type[langchain.schema.runnable.utils.Input]¶ property OutputType: Type[langchain.schema.runnable.utils.Output]¶ property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
92b0c632e35d-11
property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.base.ToTChain.html
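A hedged sketch of assembling a ToTChain via from_llm() and running it. The toy checker, the input key "problem_description", and the ThoughtValidity member name are assumptions for illustration; they are not taken verbatim from this page. See the ToTChecker section below for a fuller rule-based checker.

from typing import Tuple
from langchain.llms import OpenAI
from langchain_experimental.tot.base import ToTChain
from langchain_experimental.tot.checker import ToTChecker
from langchain_experimental.tot.thought import ThoughtValidity

class AlwaysIntermediateChecker(ToTChecker):
    # Toy rule: treat every thought as a valid intermediate step.
    def evaluate(self, problem_description: str, thoughts: Tuple[str, ...] = ()) -> ThoughtValidity:
        return ThoughtValidity.VALID_INTERMEDIATE

llm = OpenAI(temperature=1.0)  # must produce varied completions, per the llm param docs
chain = ToTChain.from_llm(
    llm=llm,
    checker=AlwaysIntermediateChecker(),
    c=3,   # children explored at each node
    k=10,  # maximum number of rounds
)
answer = chain.run(problem_description="Solve this puzzle: ...")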
247c8e562fdc-0
langchain_experimental.tot.prompts.CheckerOutputParser¶ class langchain_experimental.tot.prompts.CheckerOutputParser[source]¶ Bases: BaseOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.config.RunnableConfig | None = None, **kwargs: Optional[Any]) → T¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. async aparse(text: str) → T¶ Parse a single string model output into some structure. Parameters text – String output of a language model. Returns Structured output. async aparse_result(result: List[Generation], *, partial: bool = False) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html
247c8e562fdc-1
Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html
247c8e562fdc-2
Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_orm(obj: Any) → Model¶ get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶ classmethod is_lc_serializable() → bool¶ Is this class serializable?
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html
247c8e562fdc-3
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. parse(text: str) → ThoughtValidity[source]¶ Parse the output of the language model. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ parse_result(result: List[Generation], *, partial: bool = False) → T¶ Parse a list of candidate model Generations into a specific format.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html
247c8e562fdc-4
Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of a language model. prompt – Input PromptValue. Returns Structured output. classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html
247c8e562fdc-5
classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Any¶ property OutputType: type[T]¶ property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.prompts.CheckerOutputParser.html
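A minimal sketch of exercising the parser on its own. The exact verdict strings the parser recognizes are not documented on this page, so the example input is hypothetical.

from langchain_experimental.tot.prompts import CheckerOutputParser

parser = CheckerOutputParser()
print(parser.get_format_instructions())  # how the checker LLM should format its verdict
validity = parser.parse("INVALID")  # hypothetical model output; returns a ThoughtValidity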
30eca0fa6e94-0
langchain_experimental.tot.checker.ToTChecker¶ class langchain_experimental.tot.checker.ToTChecker[source]¶ Bases: Chain, ABC Tree of Thought (ToT) checker. This is an abstract ToT checker that must be implemented by the user. You can implement a simple rule-based checker or a more sophisticated neural-network-based classifier. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-1
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to e.g. identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-2
metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-3
addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-4
addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-5
jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-6
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} abstract evaluate(problem_description: str, thoughts: Tuple[str, ...] = ()) → ThoughtValidity[source]¶ Evaluate the response to the problem description and return the solution type. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable?
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-7
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-8
Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-9
these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
30eca0fa6e94-10
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Type[langchain.schema.runnable.utils.Input]¶ property OutputType: Type[langchain.schema.runnable.utils.Output]¶ property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.checker.ToTChecker.html
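As the class docstring suggests, a simple rule-based checker can be written by subclassing ToTChecker and implementing evaluate(). A sketch follows; the target-string rule and the ThoughtValidity member names are illustrative assumptions, not part of this page.

from typing import Tuple
from langchain_experimental.tot.checker import ToTChecker
from langchain_experimental.tot.thought import ThoughtValidity

class TargetStringChecker(ToTChecker):
    # Declared as a pydantic field, since ToTChecker is a pydantic Chain.
    target: str = "3,4,1,2"  # hypothetical known answer, for illustration

    def evaluate(self, problem_description: str, thoughts: Tuple[str, ...] = ()) -> ThoughtValidity:
        last = thoughts[-1] if thoughts else ""
        if last.strip() == self.target:
            return ThoughtValidity.VALID_FINAL  # the solution was reached
        return ThoughtValidity.VALID_INTERMEDIATE  # keep exploring this branch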
1d2e76748996-0
langchain_experimental.tot.memory.ToTDFSMemory¶ class langchain_experimental.tot.memory.ToTDFSMemory(stack: Optional[List[Thought]] = None)[source]¶ Memory for the Tree of Thought (ToT) chain. Implemented as a stack of thoughts. This allows for a depth-first search (DFS) of the ToT. Attributes level Return the current level of the stack. Methods __init__([stack]) current_path() Return the thoughts path. pop([n]) Pop the top n elements of the stack and return the last one. store(node) Add a node on the top of the stack. top() Get the top of the stack without popping it. top_parent() Get the parent of the top of the stack without popping it. __init__(stack: Optional[List[Thought]] = None)[source]¶ current_path() → List[Thought][source]¶ Return the thoughts path. pop(n: int = 1) → Optional[Thought][source]¶ Pop the top n elements of the stack and return the last one. store(node: Thought) → None[source]¶ Add a node on the top of the stack. top() → Optional[Thought][source]¶ Get the top of the stack without popping it. top_parent() → Optional[Thought][source]¶ Get the parent of the top of the stack without popping it.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.memory.ToTDFSMemory.html
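A short sketch of the stack operations listed above; the ThoughtValidity member name is an assumption.

from langchain_experimental.tot.memory import ToTDFSMemory
from langchain_experimental.tot.thought import Thought, ThoughtValidity

memory = ToTDFSMemory()
root = Thought(text="Split the problem in two", validity=ThoughtValidity.VALID_INTERMEDIATE)
child = Thought(text="Solve the first half", validity=ThoughtValidity.VALID_INTERMEDIATE)

memory.store(root)            # push onto the stack
memory.store(child)
current = memory.top()        # peek at the top without popping
parent = memory.top_parent()  # peek at the parent of the top
path = memory.current_path()  # the thoughts on the current DFS path
memory.pop()                  # backtrack one level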
e0c973c07016-0
langchain_experimental.tot.thought.Thought¶ class langchain_experimental.tot.thought.Thought[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param children: Set[langchain_experimental.tot.thought.Thought] [Optional]¶ param text: str [Required]¶ param validity: langchain_experimental.tot.thought.ThoughtValidity [Required]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.Thought.html
e0c973c07016-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.Thought.html
e0c973c07016-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought.Thought.html
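To make the Thought model concrete, here is a minimal sketch of building a small thought tree by hand. It assumes ThoughtValidity exposes members named VALID_INTERMEDIATE and VALID_FINAL; those member names are not listed on this page, so verify them against the enum in your installed version.

from langchain_experimental.tot.thought import Thought, ThoughtValidity

# Root of the tree: a partial solution judged worth expanding.
root = Thought(text="4 + 9 = 13", validity=ThoughtValidity.VALID_INTERMEDIATE)

# A follow-on step; children defaults to an empty set, so we just add to it.
leaf = Thought(text="13 * 2 = 26", validity=ThoughtValidity.VALID_FINAL)
root.children.add(leaf)

assert leaf in root.children

Because children is a plain Set[Thought], a search procedure can grow and prune the tree in place without rebuilding the model.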
e7f447459e54-0
langchain_experimental.tot.thought_generation.ProposePromptStrategy¶ class langchain_experimental.tot.thought_generation.ProposePromptStrategy[source]¶ Bases: BaseThoughtGenerationStrategy Propose thoughts sequentially using a “propose prompt”. This strategy works better when the thought space is more constrained, such as when each thought is just a word or a line. Proposing different thoughts in the same prompt completion helps to avoid duplication. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param c: int = 3¶ The number of child thoughts to propose at each step. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain,
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-1
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param output_key: str = 'text'¶ param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['problem_description', 'thoughts', 'n'], output_parser=JSONListOutputParser(), template='You are an intelligent agent that is generating thoughts in a tree of\nthoughts setting.\n\nThe output should be a markdown code snippet formatted as a JSON list of\nstrings, including the leading and trailing "```json" and "```":\n\n```json\n[\n    "<thought-1>",\n    "<thought-2>",\n    "<thought-3>"\n]\n```\n\nPROBLEM\n\n{{ problem_description }}\n\n{% if thoughts %}\nVALID THOUGHTS\n\n{% for thought in thoughts %}\n{{ thought }}\n{% endfor %}\n\nPossible next {{ n }} valid thoughts based on the last valid thought:\n{% else %}\n\nPossible next {{ n }} valid thoughts based on the PROBLEM:\n{%- endif -%}', template_format='jinja2')¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If False, extra information about the generation will also be returned. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain,
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-2
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param tot_memory: Dict[Tuple[str, ...], List[str]] [Optional]¶ param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-3
metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-4
memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-5
Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-6
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-7
Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-8
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_orm(obj: Any) → Model¶ classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable?
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-9
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. next_thought(problem_description: str, thoughts_path: Tuple[str, ...] = (), **kwargs: Any) → str[source]¶ Generate the next thought given the problem description and the thoughts generated so far. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
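A brief, hedged sketch of driving next_thought (documented above). The FakeListLLM stands in for a real model and returns a canned completion shaped like the markdown JSON list the default propose prompt requests; the canned string and puzzle text are illustrative assumptions, not part of this API.

from langchain.llms.fake import FakeListLLM
from langchain_experimental.tot.thought_generation import ProposePromptStrategy

# Canned completion in the format the propose prompt's output parser expects.
canned = '```json\n["3,4,*,2", "3,*,4,2", "4,3,*,2"]\n```'

strategy = ProposePromptStrategy(llm=FakeListLLM(responses=[canned]), c=3)

# tot_memory (see the param above) holds proposals per thoughts_path; repeated
# calls draw from it before prompting the LLM again.
thought = strategy.next_thought(
    problem_description="Fill in the missing cells of the puzzle.",
    thoughts_path=(),
)
print(thought)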
e7f447459e54-10
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-11
Prepare prompts from inputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-12
save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
e7f447459e54-13
Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: typing.Sequence[langchain.schema.runnable.base.Runnable[langchain.schema.runnable.utils.Input, langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: typing.Tuple[typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: typing.Tuple[typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Type[langchain.schema.runnable.utils.Input]¶ property OutputType: Type[langchain.schema.runnable.utils.Output]¶ property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.ProposePromptStrategy.html
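Because the strategy is an LLMChain and hence a Runnable, the with_retry wrapper shown above applies directly. A minimal sketch, reusing the FakeListLLM stand-in from the earlier example; in practice you would wrap a network-backed model where transient failures can actually occur.

from langchain.llms.fake import FakeListLLM
from langchain_experimental.tot.thought_generation import ProposePromptStrategy

strategy = ProposePromptStrategy(
    llm=FakeListLLM(responses=['```json\n["a", "b", "c"]\n```']),
    c=3,
)

# Retry any raised Exception for up to 3 attempts with jittered exponential
# backoff, matching the with_retry defaults documented above.
resilient = strategy.with_retry(stop_after_attempt=3)

Note that with_retry returns a new Runnable and leaves the original strategy unchanged.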
be334539ad13-0
langchain_experimental.tot.controller.ToTController¶ class langchain_experimental.tot.controller.ToTController(c: int = 3)[source]¶ Tree of Thought (ToT) controller. This is a version of a ToT controller, dubbed a “Simple Controller” in the paper. It has one parameter, c, which is the number of children to explore for each thought. Initialize the controller. Parameters c – The number of children to explore at each node. Methods __init__([c]) Initialize the controller. __init__(c: int = 3)[source]¶ Initialize the controller. Parameters c – The number of children to explore at each node.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.controller.ToTController.html
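A minimal sketch of constructing the controller; raising c widens the search at each level at the cost of more thought generations per node.

from langchain_experimental.tot.controller import ToTController

# Explore up to five children per thought instead of the default three.
controller = ToTController(c=5)

The controller holds no model or prompt of its own; it only governs how many children to explore, so in practice it is paired with a thought generation strategy inside a ToT chain.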
65c06c828b68-0
langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy¶ class langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy[source]¶ Bases: LLMChain Base class for a thought generation strategy. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param c: int = 3¶ The number of child thoughts to propose at each step. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-1
param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: BasePromptTemplate [Required]¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If False, extra information about the generation will also be returned. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-2
chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-3
Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-4
Generate LLM result from inputs. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-5
Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..."
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-6
# -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-7
Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...}
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-8
# -> {"_type": "foo", "verbose": False, ...} classmethod from_orm(obj: Any) → Model¶ classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-9
The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. abstract next_thought(problem_description: str, thoughts_path: Tuple[str, ...] = (), **kwargs: Any) → str[source]¶ Generate the next thought given the problem description and the thoughts generated so far. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-10
only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
65c06c828b68-11
these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
https://api.python.langchain.com/en/latest/tot/langchain_experimental.tot.thought_generation.BaseThoughtGenerationStrategy.html
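Since next_thought is abstract, a concrete strategy subclasses this base and implements it. Below is a hedged sketch of a minimal subclass; the prompt wiring and FakeListLLM are illustrative assumptions, and real strategies such as ProposePromptStrategy ship with their own prompt and output parsing.

from typing import Any, Tuple

from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate
from langchain_experimental.tot.thought_generation import BaseThoughtGenerationStrategy


class OneShotStrategy(BaseThoughtGenerationStrategy):
    """Hypothetical strategy that asks the LLM for exactly one next thought."""

    def next_thought(
        self,
        problem_description: str,
        thoughts_path: Tuple[str, ...] = (),
        **kwargs: Any,
    ) -> str:
        # predict() formats self.prompt with these keys and calls the LLM.
        return self.predict(
            problem_description=problem_description,
            thoughts="\n".join(thoughts_path),
            **kwargs,
        ).strip()


prompt = PromptTemplate.from_template(
    "Problem: {problem_description}\nSo far:\n{thoughts}\nNext thought:"
)
strategy = OneShotStrategy(
    llm=FakeListLLM(responses=["Carry the 1 into the tens column."]),
    prompt=prompt,
)
print(strategy.next_thought("Add 17 and 25."))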