langchain.document_loaders.embaas.BaseEmbaasLoader¶
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html
langchain.document_loaders.facebook_chat.concatenate_rows¶
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
row – dictionary containing message information.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.concatenate_rows.html
langchain.document_loaders.parsers.msword.MsWordParser¶
class langchain.document_loaders.parsers.msword.MsWordParser[source]¶
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.msword.MsWordParser.html
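A minimal usage sketch (illustrative only; the file name is a placeholder, and the unstructured package is assumed to be installed for Word partitioning):

from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.msword import MsWordParser

blob = Blob.from_path("report.docx")  # placeholder path; mimetype is guessed from the extension
parser = MsWordParser()
docs = parser.parse(blob)  # eager: returns a List[Document]
for doc in parser.lazy_parse(blob):  # lazy: yields Documents one at a time
    print(doc.metadata, doc.page_content[:80])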
langchain.document_loaders.word_document.UnstructuredWordDocumentLoader¶
class langchain.document_loaders.word_document.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft Word files using Unstructured.
Works with both .docx and .doc files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredWordDocumentLoader
loader = UnstructuredWordDocumentLoader(
    "example.docx", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-docx
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredWordDocumentLoader¶
Microsoft Word
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html
langchain.document_loaders.url_playwright.PlaywrightEvaluator¶
class langchain.document_loaders.url_playwright.PlaywrightEvaluator[source]¶
Abstract base class for all evaluators.
Each evaluator should take a page, a browser instance, and a response
object, process the page as necessary, and return the resulting text.
Methods
__init__()
evaluate(page, browser, response)
Synchronously process the page and return the resulting text.
evaluate_async(page, browser, response)
Asynchronously process the page and return the resulting text.
__init__()¶
abstract evaluate(page: Page, browser: Browser, response: Response) → str[source]¶
Synchronously process the page and return the resulting text.
Parameters
page – The page to process.
browser – The browser instance.
response – The response from page.goto().
Returns
The text content of the page.
Return type
text
abstract async evaluate_async(page: AsyncPage, browser: AsyncBrowser, response: AsyncResponse) → str[source]¶
Asynchronously process the page and return the resulting text.
Parameters
page – The page to process.
browser – The browser instance.
response – The response from page.goto().
Returns
The text content of the page.
Return type
text
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightEvaluator.html
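A minimal sketch of a concrete evaluator (hypothetical class name; it simply returns the page's body text via Playwright's inner_text call):

from langchain.document_loaders.url_playwright import PlaywrightEvaluator

class BodyTextEvaluator(PlaywrightEvaluator):
    """Return the visible <body> text of the page."""

    def evaluate(self, page, browser, response):
        # sync Playwright page
        return page.inner_text("body")

    async def evaluate_async(self, page, browser, response):
        # async Playwright page
        return await page.inner_text("body")

Such an evaluator is meant to be plugged into the Playwright URL loader; check the PlaywrightURLLoader documentation for the parameter that accepts it.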
langchain.document_loaders.weather.WeatherDataLoader¶
class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶
Load weather data with Open Weather Map API.
Reads the forecast & current weather of any location using OpenWeatherMap’s free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
Initialize with parameters.
Methods
__init__(client, places)
Initialize with parameters.
from_params(places, *[, openweathermap_api_key])
lazy_load()
Lazily load weather data for the given locations.
load()
Load weather data for the given locations.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: OpenWeatherMapAPIWrapper, places: Sequence[str]) → None[source]¶
Initialize with parameters.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → WeatherDataLoader[source]¶
lazy_load() → Iterator[Document][source]¶
Lazily load weather data for the given locations.
load() → List[Document][source]¶
Load weather data for the given locations.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WeatherDataLoader¶
Weather
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html
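A minimal usage sketch (assuming the pyowm package is installed; the API key is a placeholder and can instead be supplied via the OPENWEATHERMAP_API_KEY environment variable used by OpenWeatherMapAPIWrapper):

from langchain.document_loaders import WeatherDataLoader

loader = WeatherDataLoader.from_params(
    places=["chennai", "vellore"],      # example locations
    openweathermap_api_key="YOUR_API_KEY",
)
docs = loader.load()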
langchain.document_loaders.url_selenium.SeleniumURLLoader¶
class langchain.document_loaders.url_selenium.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]¶
Load HTML pages with Selenium and parse with Unstructured.
This is useful for loading pages that require JavaScript to render.
urls¶
List of URLs to load.
Type
List[str]
continue_on_failure¶
If True, continue loading other URLs on failure.
Type
bool
browser¶
The browser to use, either ‘chrome’ or ‘firefox’.
Type
str
binary_location¶
The location of the browser binary.
Type
Optional[str]
executable_path¶
The path to the browser executable.
Type
Optional[str]
headless¶
If True, the browser will run in headless mode.
Type
bool
arguments¶
List of arguments to pass to the browser.
Type
List[str]
Load a list of URLs using Selenium and unstructured.
Methods
__init__(urls[, continue_on_failure, ...])
Load a list of URLs using Selenium and unstructured.
lazy_load()
A lazy loader for Documents.
load()
Load the specified URLs using Selenium and create Document instances.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]¶
Load a list of URLs using Selenium and unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SeleniumURLLoader¶
URL
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html
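A minimal usage sketch (assuming the selenium and unstructured packages plus a matching browser driver are available; the URL is just an example JavaScript-heavy page):

from langchain.document_loaders import SeleniumURLLoader

urls = ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"]
loader = SeleniumURLLoader(urls=urls, browser="chrome", headless=True)
docs = loader.load()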
langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, doc_content_chars_max: Optional[int] = None, **kwargs: Any)[source]¶
Load a query result from Arxiv.
The loader converts the original PDF format into text.
Parameters
Supports all arguments of ArxivAPIWrapper.
Methods
__init__(query[, doc_content_chars_max])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, doc_content_chars_max: Optional[int] = None, **kwargs: Any)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ArxivLoader¶
Arxiv
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html
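A minimal usage sketch (assuming the arxiv and pymupdf packages are installed; load_max_docs is one of the ArxivAPIWrapper arguments forwarded through **kwargs):

from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(query="1605.08386", load_max_docs=2)
docs = loader.load()
print(docs[0].metadata)  # title, authors, summary, ...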
langchain.document_loaders.sharepoint.SharePointLoader¶
class langchain.document_loaders.sharepoint.SharePointLoader[source]¶
Bases: O365BaseLoader
Load from SharePoint.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param auth_with_token: bool = False¶
Whether to authenticate with a token or not. Defaults to False.
param chunk_size: Union[int, str] = 5242880¶
Number of bytes to retrieve from each api call to the server. int or ‘auto’.
param document_library_id: str [Required]¶
The ID of the SharePoint document library to load data from.
param folder_path: Optional[str] = None¶
The path to the folder to load data from.
param object_ids: Optional[List[str]] = None¶
The IDs of the objects to load data from.
param settings: _O365Settings [Optional]¶
Settings for the Office365 API client.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using SharePointLoader¶
Microsoft SharePoint
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html
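A minimal usage sketch (the library ID and folder path are placeholders; Office365 client credentials must be configured as described in the Microsoft SharePoint integration guide before the loader can authenticate):

from langchain.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(
    document_library_id="YOUR_DOCUMENT_LIBRARY_ID",  # placeholder
    folder_path="/Shared Documents",                 # placeholder
    auth_with_token=False,
)
docs = loader.load()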
langchain.document_loaders.odt.UnstructuredODTLoader¶
class langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load OpenOffice ODT files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredODTLoader
loader = UnstructuredODTLoader(
    "example.odt", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-odt
Parameters
file_path – The path to the file to load.
mode – The mode to use when loading the file. Can be one of “single”,
“multi”, or “all”. Default is “single”.
**unstructured_kwargs – Any kwargs to pass to the unstructured library.
Methods
__init__(file_path[, mode])
param file_path
The path to the file to load.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the file to load.
mode – The mode to use when loading the file. Can be one of “single”,
“multi”, or “all”. Default is “single”.
**unstructured_kwargs – Any kwargs to pass to the unstructured library.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredODTLoader¶
Open Document Format (ODT)
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html
langchain.document_loaders.chatgpt.ChatGPTLoader¶
class langchain.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]¶
Load conversations from exported ChatGPT data.
Initialize a class object.
Parameters
log_file – Path to the log file
num_logs – Number of logs to load. If 0, load all logs.
Methods
__init__(log_file[, num_logs])
Initialize a class object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(log_file: str, num_logs: int = -1)[source]¶
Initialize a class object.
Parameters
log_file – Path to the log file
num_logs – Number of logs to load. If 0, load all logs.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ChatGPTLoader¶
OpenAI
ChatGPT Data
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.ChatGPTLoader.html
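A minimal usage sketch (conversations.json is the file produced by ChatGPT's data-export feature; the path is a placeholder):

from langchain.document_loaders.chatgpt import ChatGPTLoader

loader = ChatGPTLoader(log_file="./conversations.json", num_logs=1)
docs = loader.load()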
langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader¶
class langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader(file_path: str, *, transcript_format: TranscriptFormat = TranscriptFormat.TEXT, config: Optional[assemblyai.TranscriptionConfig] = None, api_key: Optional[str] = None)[source]¶
Loader for AssemblyAI audio transcripts.
It uses the AssemblyAI API to transcribe audio files
and loads the transcribed text into one or more Documents,
depending on the specified format.
To use, you should have the assemblyai python package installed, and the
environment variable ASSEMBLYAI_API_KEY set with your API key.
Alternatively, the API key can also be passed as an argument.
Audio files can be specified via a URL or a local file path.
Initializes the AssemblyAI AudioTranscriptLoader.
Parameters
file_path – A URL or a local file path.
transcript_format – Transcript format to use.
See class TranscriptFormat for more info.
config – Transcription options and features. If None is given,
the Transcriber’s default configuration will be used.
api_key – AssemblyAI API key.
Methods
__init__(file_path, *[, transcript_format, ...])
Initializes the AssemblyAI AudioTranscriptLoader.
lazy_load()
A lazy loader for Documents.
load()
Transcribes the audio file and loads the transcript into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, transcript_format: TranscriptFormat = TranscriptFormat.TEXT, config: Optional[assemblyai.TranscriptionConfig] = None, api_key: Optional[str] = None)[source]¶
Initializes the AssemblyAI AudioTranscriptLoader.
Parameters
file_path – A URL or a local file path.
transcript_format – Transcript format to use.
See class TranscriptFormat for more info.
config – Transcription options and features. If None is given,
the Transcriber’s default configuration will be used.
api_key – AssemblyAI API key.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Transcribes the audio file and loads the transcript into documents.
It uses the AssemblyAI API to transcribe the audio file and blocks until
the transcription is finished.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AssemblyAIAudioTranscriptLoader¶
AssemblyAI Audio Transcripts
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader.html
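A minimal usage sketch (assuming the assemblyai package is installed and ASSEMBLYAI_API_KEY is set in the environment; the audio URL is a placeholder):

from langchain.document_loaders.assemblyai import (
    AssemblyAIAudioTranscriptLoader,
    TranscriptFormat,
)

loader = AssemblyAIAudioTranscriptLoader(
    file_path="https://example.com/audio.mp3",       # placeholder audio URL
    transcript_format=TranscriptFormat.SENTENCES,    # one Document per sentence
)
docs = loader.load()  # blocks until transcription finishes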
langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Load PDF files using PyMuPDF.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load(**kwargs)
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(**kwargs: Optional[Any]) → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html
langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter¶
class langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter(code: str)[source]¶
Code segmenter for JavaScript.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter.html
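A minimal usage sketch (assuming the esprima package, which this segmenter relies on for parsing, is installed):

from langchain.document_loaders.parsers.language.javascript import JavaScriptSegmenter

code = """
function greet(name) {
  return "Hello, " + name;
}
const answer = 42;
"""

segmenter = JavaScriptSegmenter(code)
if segmenter.is_valid():
    print(segmenter.extract_functions_classes())  # source of each top-level function/class
    print(segmenter.simplify_code())              # code with function/class bodies elided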
langchain.document_loaders.tsv.UnstructuredTSVLoader¶
class langchain.document_loaders.tsv.UnstructuredTSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load TSV files using Unstructured.
Like other Unstructured loaders, UnstructuredTSVLoader can be used in both
“single” and “elements” mode. In “elements” mode, the TSV file is loaded as a
single Unstructured Table element, and an HTML representation of the table is
available in the “text_as_html” key of the document metadata.
Examples
from langchain.document_loaders.tsv import UnstructuredTSVLoader
loader = UnstructuredTSVLoader("stanley-cups.tsv", mode="elements")
docs = loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredTSVLoader¶
TSV
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tsv.UnstructuredTSVLoader.html
langchain.document_loaders.facebook_chat.FacebookChatLoader¶
class langchain.document_loaders.facebook_chat.FacebookChatLoader(path: str)[source]¶
Load Facebook Chat messages directory dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FacebookChatLoader¶
Facebook Chat
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.FacebookChatLoader.html
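A minimal usage sketch (the path is a placeholder pointing at a message JSON file from a Facebook data export):

from langchain.document_loaders.facebook_chat import FacebookChatLoader

loader = FacebookChatLoader(path="./facebook_chat/message_1.json")
docs = loader.load()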
langchain.document_loaders.hn.HNLoader¶
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load Hacker News data.
It loads data from either main page results or the comments page.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Get important HN webpage information.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_comments(soup_info)
Load comments from a HN post.
load_results(soup)
Load items from an HN page.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Get important HN webpage information.
HN webpage components are:
title
content
source url
time of post
author of the post
number of comments
rank of the post
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_comments(soup_info: Any) → List[Document][source]¶
Load comments from a HN post.
load_results(soup: Any) → List[Document][source]¶
Load items from an HN page.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using HNLoader¶
Hacker News
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html
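A minimal usage sketch (assuming the beautifulsoup4 package is installed; the item URL is just an example post):

from langchain.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
docs = loader.load()
print(docs[0].metadata)  # includes the post title and source URL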
langchain.document_loaders.assemblyai.TranscriptFormat¶
class langchain.document_loaders.assemblyai.TranscriptFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Transcript format to use for the document loader.
TEXT = 'text'¶
One document with the transcription text
SENTENCES = 'sentences'¶
Multiple documents, splits the transcription by each sentence
PARAGRAPHS = 'paragraphs'¶
Multiple documents, splits the transcription by each paragraph
SUBTITLES_SRT = 'subtitles_srt'¶
One document with the transcript exported in SRT subtitles format
SUBTITLES_VTT = 'subtitles_vtt'¶
One document with the transcript exported in VTT subtitles format
Examples using TranscriptFormat¶
AssemblyAI Audio Transcripts
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.assemblyai.TranscriptFormat.html
langchain.document_loaders.helpers.FileEncoding¶
class langchain.document_loaders.helpers.FileEncoding(encoding: Optional[str], confidence: float, language: Optional[str])[source]¶
File encoding as the NamedTuple.
Create new instance of FileEncoding(encoding, confidence, language)
Attributes
confidence
The confidence of the encoding.
encoding
The encoding of the file.
language
The language of the file.
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.FileEncoding.html
langchain.document_loaders.base.BaseLoader¶
class langchain.document_loaders.base.BaseLoader[source]¶
Interface for Document Loader.
Implementations should implement the lazy-loading method using generators
to avoid loading all Documents into memory at once.
The load method will remain as is for backwards compatibility, but its
implementation should be just list(self.lazy_load()).
Methods
__init__()
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__()¶
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
abstract load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BaseLoader¶
Indexing
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseLoader.html
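A minimal sketch of a custom loader built on this interface (hypothetical class that yields one Document per line of a text file, with load() implemented as list(self.lazy_load()) as recommended above):

from typing import Iterator, List

from langchain.document_loaders.base import BaseLoader
from langchain.schema import Document

class LineLoader(BaseLoader):
    """Yield one Document per line of a plain-text file."""

    def __init__(self, file_path: str) -> None:
        self.file_path = file_path

    def lazy_load(self) -> Iterator[Document]:
        with open(self.file_path, encoding="utf-8") as f:
            for i, line in enumerate(f):
                yield Document(
                    page_content=line.rstrip("\n"),
                    metadata={"source": self.file_path, "line": i},
                )

    def load(self) -> List[Document]:
        return list(self.lazy_load())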
langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter¶
class langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]¶
Abstract class for the code segmenter.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
abstract extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
abstract simplify_code() → str[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html
langchain.document_loaders.org_mode.UnstructuredOrgModeLoader¶
class langchain.document_loaders.org_mode.UnstructuredOrgModeLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Org-Mode files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredOrgModeLoader
loader = UnstructuredOrgModeLoader(
    "example.org", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-org
Parameters
file_path – The path to the file to load.
mode – The mode to load the file from. Default is “single”.
**unstructured_kwargs – Any additional keyword arguments to pass
to the unstructured library.
Methods
__init__(file_path[, mode])
param file_path
The path to the file to load.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the file to load.
mode – The mode to load the file from. Default is “single”.
**unstructured_kwargs – Any additional keyword arguments to pass
to the unstructured library.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredOrgModeLoader¶
Org-mode
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.org_mode.UnstructuredOrgModeLoader.html
langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Load .srt (subtitle) files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load the file using pysrt.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the file using pysrt.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SRTLoader¶
Subtitle
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html
langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader¶
class langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Load PDF files as HTML content using PDFMiner.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html
langchain.document_loaders.twitter.TwitterTweetLoader¶
class langchain.document_loaders.twitter.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
Load Twitter tweets.
Read tweets of the user’s Twitter handle.
First, go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
Methods
__init__(auth_handler, twitter_users[, ...])
from_bearer_token(oauth2_bearer_token, ...)
Create a TwitterTweetLoader from OAuth2 bearer token.
from_secrets(access_token, ...[, number_tweets])
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load()
A lazy loader for Documents.
load()
Load tweets.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load tweets.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TwitterTweetLoader¶
Twitter
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html
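A minimal usage sketch (assuming the tweepy package is installed; the bearer token is a placeholder obtained from the Twitter developer portal):

from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",  # placeholder
    twitter_users=["elonmusk"],               # example handle
    number_tweets=50,
)
docs = loader.load()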
langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Load from Azure Blob Storage container.
Initialize with connection string, container and blob prefix.
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
prefix
Prefix for blob names.
Methods
__init__(conn_str, container[, prefix])
Initialize with connection string, container and blob prefix.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conn_str: str, container: str, prefix: str = '')[source]¶
Initialize with connection string, container and blob prefix.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AzureBlobStorageContainerLoader¶
Azure Blob Storage
Azure Blob Storage Container
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html
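A minimal usage sketch (assuming the azure-storage-blob package is installed; the connection string, container, and prefix are placeholders):

from langchain.document_loaders import AzureBlobStorageContainerLoader

loader = AzureBlobStorageContainerLoader(
    conn_str="<connection string>",
    container="<container name>",
    prefix="<optional blob prefix>",
)
docs = loader.load()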
langchain.document_loaders.larksuite.LarkSuiteDocLoader¶
class langchain.document_loaders.larksuite.LarkSuiteDocLoader(domain: str, access_token: str, document_id: str)[source]¶
Load from LarkSuite (FeiShu).
Initialize with domain, access_token (tenant / user), and document_id.
Parameters
domain – The domain to load the LarkSuite.
access_token – The access_token to use.
document_id – The document_id to load.
Methods
__init__(domain, access_token, document_id)
Initialize with domain, access_token (tenant / user), and document_id.
lazy_load()
Lazy load LarkSuite (FeiShu) document.
load()
Load LarkSuite (FeiShu) document.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(domain: str, access_token: str, document_id: str)[source]¶
Initialize with domain, access_token (tenant / user), and document_id.
Parameters
domain – The domain to load the LarkSuite.
access_token – The access_token to use.
document_id – The document_id to load.
lazy_load() → Iterator[Document][source]¶
Lazy load LarkSuite (FeiShu) document.
load() → List[Document][source]¶
Load LarkSuite (FeiShu) document.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using LarkSuiteDocLoader¶
LarkSuite (FeiShu)
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html
langchain.document_loaders.notebook.NotebookLoader¶
class langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Load Jupyter notebook (.ipynb) files.
Initialize with a path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
Methods
__init__(path[, include_outputs, ...])
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Initialize with a path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NotebookLoader¶
Jupyter Notebook
Notebook
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html
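A minimal usage sketch (the notebook path is a placeholder):

from langchain.document_loaders import NotebookLoader

loader = NotebookLoader(
    "example_notebook.ipynb",
    include_outputs=True,
    max_output_length=20,
    remove_newline=True,
)
docs = loader.load()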
langchain.document_loaders.notion.NotionDirectoryLoader¶
class langchain.document_loaders.notion.NotionDirectoryLoader(path: str)[source]¶
Load Notion directory dump.
Initialize with a file path.
Methods
__init__(path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NotionDirectoryLoader¶
Notion DB
Notion DB 1/2
Perform context-aware text splitting
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notion.NotionDirectoryLoader.html
langchain.document_loaders.s3_file.S3FileLoader¶
class langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str, *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Load from Amazon AWS S3 file.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether or not to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether or not to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
Methods
__init__(bucket, key, *[, region_name, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, key: str, *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
key – The key of the S3 object.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether or not to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether or not to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using S3FileLoader¶
AWS S3 Directory
AWS S3 File
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html
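A minimal usage sketch (assuming the boto3 and unstructured packages are installed; the bucket and key are placeholders, and credentials come from the usual boto3 credential chain unless passed explicitly):

from langchain.document_loaders import S3FileLoader

loader = S3FileLoader("my-bucket", "reports/2023/summary.docx")
docs = loader.load()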
f10787efee46-0 | langchain.document_loaders.unstructured.satisfies_min_unstructured_version¶
langchain.document_loaders.unstructured.satisfies_min_unstructured_version(min_version: str) → bool[source]¶
Check whether the installed Unstructured version meets or exceeds the minimum version required
for the feature in question. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.satisfies_min_unstructured_version.html |
2a033be461bb-0 | langchain.document_loaders.youtube.YoutubeLoader¶
class langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Load YouTube transcripts.
Initialize with YouTube video ID.
Methods
__init__(video_id[, add_video_info, ...])
Initialize with YouTube video ID.
extract_video_id(youtube_url)
Extract video id from common YT urls.
from_youtube_url(youtube_url, **kwargs)
Given youtube URL, load video.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Initialize with YouTube video ID.
static extract_video_id(youtube_url: str) → str[source]¶
Extract video id from common YT urls.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → YoutubeLoader[source]¶
Given youtube URL, load video.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
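A minimal usage sketch (the URL is a placeholder; add_video_info=True additionally requires the pytube package):
from langchain.document_loaders import YoutubeLoader
# VIDEO_ID is a placeholder for a real YouTube video id
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=VIDEO_ID", add_video_info=True
)
docs = loader.load()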
Examples using YoutubeLoader¶
YouTube
YouTube transcripts | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.YoutubeLoader.html |
480f01ac7cf9-0 | langchain.document_loaders.parsers.grobid.ServerUnavailableException¶
class langchain.document_loaders.parsers.grobid.ServerUnavailableException[source]¶
Exception raised when the Grobid server is unavailable. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.ServerUnavailableException.html |
6f73479ff416-0 | langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Load PDF using pypdfium2 and chunks at character level.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html |
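A minimal usage sketch (assumes the pypdfium2 package is installed and that example.pdf exists locally):
from langchain.document_loaders import PyPDFium2Loader
loader = PyPDFium2Loader("example.pdf")
pages = loader.load()  # one Document per page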
8054a4ba84ce-0 | langchain.document_loaders.notebook.remove_newlines¶
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any[source]¶
Recursively remove newlines, no matter the data structure they are stored in. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.remove_newlines.html |
66e9735965fb-0 | langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Load a JSON file using a jq schema.
Example
[{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text
{"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
Methods
__init__(file_path, jq_schema[, ...])
Initialize the JSONLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the JSON file.
load_and_split([text_splitter])
Load Documents and split into chunks. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html |
66e9735965fb-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the JSON file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html |
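A minimal usage sketch (assumes the jq package is installed; the file name, jq_schema and metadata field below are hypothetical):
from langchain.document_loaders import JSONLoader

def metadata_func(record: dict, metadata: dict) -> dict:
    # copy a field from each JSON object into the document metadata
    metadata["author"] = record.get("author")
    return metadata

loader = JSONLoader(
    file_path="posts.json",      # hypothetical file
    jq_schema=".[]",             # iterate over a top-level array of objects
    content_key="text",          # use the "text" field as page_content
    metadata_func=metadata_func,
)
docs = loader.load()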
1ff66568f481-0 | langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator¶
class langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator(remove_selectors: Optional[List[str]] = None)[source]¶
Evaluates the page HTML content using the unstructured library.
Initialize UnstructuredHtmlEvaluator.
Methods
__init__([remove_selectors])
Initialize UnstructuredHtmlEvaluator.
evaluate(page, browser, response)
Synchronously process the HTML content of the page.
evaluate_async(page, browser, response)
Asynchronously process the HTML content of the page.
__init__(remove_selectors: Optional[List[str]] = None)[source]¶
Initialize UnstructuredHtmlEvaluator.
evaluate(page: Page, browser: Browser, response: Response) → str[source]¶
Synchronously process the HTML content of the page.
async evaluate_async(page: AsyncPage, browser: AsyncBrowser, response: AsyncResponse) → str[source]¶
Asynchronously process the HTML content of the page. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator.html |
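This evaluator is typically not used directly; it backs PlaywrightURLLoader, which builds one from remove_selectors. A minimal sketch (assumes playwright and unstructured are installed; the URL and selectors are illustrative):
from langchain.document_loaders import PlaywrightURLLoader
# strip navigation chrome before extracting the page text
loader = PlaywrightURLLoader(
    urls=["https://example.com"],
    remove_selectors=["header", "nav", "footer"],
)
docs = loader.load()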
821e1b5a598f-0 | langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Load from Stripe API.
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
Methods
__init__(resource[, access_token])
Initialize with a resource and an access token.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, access_token: Optional[str] = None) → None[source]¶
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
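A minimal usage sketch (the resource name "charges" is illustrative, and the access token is read from an environment variable here, which is an assumption about how credentials are stored):
import os
from langchain.document_loaders import StripeLoader
loader = StripeLoader("charges", access_token=os.environ["STRIPE_ACCESS_TOKEN"])
docs = loader.load()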
Examples using StripeLoader¶
Stripe | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html |
13d92b2e3734-0 | langchain.document_loaders.psychic.PsychicLoader¶
class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Load from Psychic.dev.
Initialize with API key, connector id, and account id.
Parameters
api_key – The Psychic API key.
account_id – The Psychic account id.
connector_id – The Psychic connector id.
Methods
__init__(api_key, account_id[, connector_id])
Initialize with API key, connector id, and account id.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Initialize with API key, connector id, and account id.
Parameters
api_key – The Psychic API key.
account_id – The Psychic account id.
connector_id – The Psychic connector id.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
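A minimal usage sketch (assumes the psychicapi package is installed; all identifiers below are placeholders):
from langchain.document_loaders import PsychicLoader
loader = PsychicLoader(
    api_key="PSYCHIC_API_KEY",   # placeholder
    account_id="ACCOUNT_ID",     # placeholder
    connector_id="notion",       # placeholder connector id
)
docs = loader.load()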
Examples using PsychicLoader¶
Psychic | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html |
5db4c07a9186-0 | langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader¶
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Load from Tencent Cloud COS file.
Initialize with COS config, bucket and key name.
Parameters
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
Methods
__init__(conf, bucket, key)
Initialize with COS config, bucket and key name.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conf: Any, bucket: str, key: str)[source]¶
Initialize with COS config, bucket and key name.
Parameters
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
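A minimal usage sketch (assumes the cos-python-sdk-v5 package is installed; region, credentials, bucket and key are placeholders):
from qcloud_cos import CosConfig
from langchain.document_loaders import TencentCOSFileLoader
conf = CosConfig(
    Region="ap-guangzhou",   # placeholder region
    SecretId="SECRET_ID",    # placeholder credential
    SecretKey="SECRET_KEY",  # placeholder credential
)
loader = TencentCOSFileLoader(conf=conf, bucket="my-bucket", key="docs/report.pdf")
docs = loader.load()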
Examples using TencentCOSFileLoader¶
Tencent COS File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader.html |
50feec3b4719-0 | langchain.document_loaders.brave_search.BraveSearchLoader¶
class langchain.document_loaders.brave_search.BraveSearchLoader(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Load with Brave Search engine.
Initializes the BraveSearchLoader.
Parameters
query – The query to search for.
api_key – The API key to use.
search_kwargs – The search kwargs to use.
Methods
__init__(query, api_key[, search_kwargs])
Initializes the BraveSearchLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Initializes the BraveSearchLoader.
Parameters
query – The query to search for.
api_key – The API key to use.
search_kwargs – The search kwargs to use.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
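A minimal usage sketch (the API key is a placeholder; search_kwargs such as "count" are forwarded to the Brave Search API):
from langchain.document_loaders import BraveSearchLoader
loader = BraveSearchLoader(
    query="obama middle name",
    api_key="BRAVE_API_KEY",     # placeholder
    search_kwargs={"count": 3},  # limit the number of results
)
docs = loader.load()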
Examples using BraveSearchLoader¶
Brave Search | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.brave_search.BraveSearchLoader.html |
6386046f1533-0 | langchain.document_loaders.airbyte.AirbyteShopifyLoader¶
class langchain.document_loaders.airbyte.AirbyteShopifyLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Shopify using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteShopifyLoader.html |
6386046f1533-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
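A minimal usage sketch (assumes the airbyte-source-shopify package is installed; the config mapping follows that connector's spec and every value below is a placeholder):
from langchain.document_loaders.airbyte import AirbyteShopifyLoader
config = {
    # placeholder Shopify source configuration; see the connector's spec
    "start_date": "2023-01-01",
    "shop": "your-shop-name",
    "credentials": {"auth_method": "api_password", "api_password": "API_PASSWORD"},
}
loader = AirbyteShopifyLoader(config=config, stream_name="orders")
docs = loader.load()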
Examples using AirbyteShopifyLoader¶
Airbyte Shopify | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteShopifyLoader.html |
fd72353b0d94-0 | langchain.document_loaders.csv_loader.UnstructuredCSVLoader¶
class langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load CSV files using Unstructured.
Like other
Unstructured loaders, UnstructuredCSVLoader can be used in both
“single” and “elements” mode. If you use the loader in “elements”
mode, the CSV file will be a single Unstructured Table element,
and an HTML representation of the table will be available in the
“text_as_html” key in the document metadata.
Examples
from langchain.document_loaders.csv_loader import UnstructuredCSVLoader
loader = UnstructuredCSVLoader("stanley-cups.csv", mode="elements")
docs = loader.load()
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the CSV file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html |
fd72353b0d94-1 | A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredCSVLoader¶
CSV | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html |
630739f88e7c-0 | langchain.document_loaders.s3_directory.S3DirectoryLoader¶
class langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '', *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Load from Amazon AWS S3 directory.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
prefix – The prefix of the S3 key. Defaults to “”.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
630739f88e7c-1 | different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
Methods
__init__(bucket[, prefix, region_name, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
630739f88e7c-2 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, prefix: str = '', *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
prefix – The prefix of the S3 key. Defaults to “”.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
630739f88e7c-3 | client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
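A minimal usage sketch (bucket name and prefix are placeholders; boto3 credentials are assumed to be configured):
from langchain.document_loaders import S3DirectoryLoader
# load every object under the "reports/" prefix of a hypothetical bucket
loader = S3DirectoryLoader("my-bucket", prefix="reports/")
docs = loader.load()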
Examples using S3DirectoryLoader¶
AWS S3 Directory | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
f8ef841249de-0 | langchain.document_loaders.mhtml.MHTMLLoader¶
class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Parse MHTML files with BeautifulSoup.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – Path to file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when getting the text
from the soup.
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '') → None[source]¶
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – Path to file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when getting the text
from the soup.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
f8ef841249de-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
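A minimal usage sketch (assumes beautifulsoup4 is installed and that the .mht file exists locally):
from langchain.document_loaders import MHTMLLoader
loader = MHTMLLoader("example.mht")  # hypothetical file name
docs = loader.load()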
Examples using MHTMLLoader¶
mhtml | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
75f03c9aded1-0 | langchain.document_loaders.unstructured.UnstructuredFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured.
The file loader uses the
unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("example.pdf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
75f03c9aded1-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileLoader¶
Unstructured
Unstructured File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
d9b4ce8a70df-0 | langchain.document_loaders.gitbook.GitbookLoader¶
class langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]¶
Load GitBook data.
Load from either a single page, or
load all (relative) paths in the navbar.
Initialize with web page and whether to load all paths.
Parameters
web_page – The web page to load or the starting point from where
relative paths are discovered.
load_all_paths – If set to True, all relative paths in the navbar
are loaded instead of only web_page.
base_url – If load_all_paths is True, the relative paths are
appended to this base url. Defaults to web_page.
content_selector – The CSS selector for the content to load.
Defaults to “main”.
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
Attributes
web_path
Methods
__init__(web_page[, load_all_paths, ...])
Initialize with web page and whether to load all paths.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Fetch text from one single GitBook page.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
d9b4ce8a70df-1 | scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]¶
Initialize with web page and whether to load all paths.
Parameters
web_page – The web page to load or the starting point from where
relative paths are discovered.
load_all_paths – If set to True, all relative paths in the navbar
are loaded instead of only web_page.
base_url – If load_all_paths is True, the relative paths are
appended to this base url. Defaults to web_page.
content_selector – The CSS selector for the content to load.
Defaults to “main”.
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Fetch text from one single GitBook page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
d9b4ce8a70df-2 | List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
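A minimal usage sketch (the site URL is illustrative; with load_all_paths=True every relative path found in the navbar is fetched):
from langchain.document_loaders import GitbookLoader
loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
docs = loader.load()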
Examples using GitbookLoader¶
GitBook | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
ac610b6653db-0 | langchain.document_loaders.blob_loaders.schema.BlobLoader¶
class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶
Abstract interface for blob loader implementations.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
Methods
__init__()
yield_blobs()
A lazy loader for raw data represented by LangChain's Blob object.
__init__()¶
abstract yield_blobs() → Iterable[Blob][source]¶
A lazy loader for raw data represented by LangChain’s Blob object.
Returns
A generator over blobs | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html |
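A minimal sketch of a custom implementation (the directory path and file pattern are illustrative; FileSystemBlobLoader already covers this common case):
from pathlib import Path
from typing import Iterable
from langchain.document_loaders.blob_loaders import Blob, BlobLoader

class TxtBlobLoader(BlobLoader):
    """Yield every .txt file under a directory as a Blob."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)

    def yield_blobs(self) -> Iterable[Blob]:
        for path in self.root.rglob("*.txt"):
            yield Blob.from_path(str(path))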
679c349aa7d9-0 | langchain.document_loaders.googledrive.GoogleDriveLoader¶
class langchain.document_loaders.googledrive.GoogleDriveLoader[source]¶
Bases: BaseLoader, BaseModel
Load Google Docs from Google Drive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶
Path to the credentials file.
param document_ids: Optional[List[str]] = None¶
The document ids to load from.
param file_ids: Optional[List[str]] = None¶
The file ids to load from.
param file_loader_cls: Any = None¶
The file loader class to use.
param file_loader_kwargs: Dict[str, Any] = {}¶
The file loader kwargs to use.
param file_types: Optional[Sequence[str]] = None¶
The file types to load. Only applies when folder_id is given.
param folder_id: Optional[str] = None¶
The folder id to load from.
param load_trashed_files: bool = False¶
Whether to load trashed files. Only applies when folder_id is given.
param recursive: bool = False¶
Whether to load recursively. Only applies when folder_id is given.
param service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')¶
Path to the service account key file.
param token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')¶
Path to the token file.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
679c349aa7d9-1 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
679c349aa7d9-2 | classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
679c349aa7d9-3 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
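A minimal usage sketch (the folder id is a placeholder; credentials.json and token.json are expected at the default paths shown above unless overridden):
from langchain.document_loaders import GoogleDriveLoader
loader = GoogleDriveLoader(
    folder_id="FOLDER_ID",  # placeholder Google Drive folder id
    recursive=False,
)
docs = loader.load()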
Examples using GoogleDriveLoader¶
Google Drive | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html |
b5f2798e5fbe-0 | langchain.document_loaders.airbyte.AirbyteGongLoader¶
class langchain.document_loaders.airbyte.AirbyteGongLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Gong using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteGongLoader.html |
b5f2798e5fbe-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteGongLoader¶
Airbyte Gong | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteGongLoader.html |
3256ac8a3e5f-0 | langchain.document_loaders.slack_directory.SlackDirectoryLoader¶
class langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Load from a Slack directory dump.
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
Methods
__init__(zip_path[, workspace_url])
Initialize the SlackDirectoryLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the Slack directory dump.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the Slack directory dump.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
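A minimal usage sketch (the zip path and workspace URL are placeholders; passing the workspace URL turns document sources into links):
from langchain.document_loaders import SlackDirectoryLoader
loader = SlackDirectoryLoader(
    "slack_export.zip",                              # placeholder export path
    workspace_url="https://my-workspace.slack.com",  # placeholder URL
)
docs = loader.load()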
Examples using SlackDirectoryLoader¶
Slack | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html |
b2d26746c191-0 | langchain.document_loaders.discord.DiscordChatLoader¶
class langchain.document_loaders.discord.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶
Load Discord chat logs.
Initialize with a Pandas DataFrame containing chat logs.
Parameters
chat_log – Pandas DataFrame containing chat logs.
user_id_col – Name of the column containing the user ID. Defaults to “ID”.
Methods
__init__(chat_log[, user_id_col])
Initialize with a Pandas DataFrame containing chat logs.
lazy_load()
A lazy loader for Documents.
load()
Load all chat messages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶
Initialize with a Pandas DataFrame containing chat logs.
Parameters
chat_log – Pandas DataFrame containing chat logs.
user_id_col – Name of the column containing the user ID. Defaults to “ID”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load all chat messages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
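A minimal usage sketch (assumes pandas is installed and that messages.csv comes from a Discord data export, so its columns, including "ID", match that export format):
import pandas as pd
from langchain.document_loaders.discord import DiscordChatLoader
chat_log = pd.read_csv("messages.csv")  # placeholder path to an exported file
loader = DiscordChatLoader(chat_log, user_id_col="ID")
docs = loader.load()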
Examples using DiscordChatLoader¶
Discord | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.discord.DiscordChatLoader.html |
e5ba4dd862ec-0 | langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load IMSDb webpages.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
e5ba4dd862ec-1 | scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
e5ba4dd862ec-2 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
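A minimal usage sketch (the script URL is illustrative):
from langchain.document_loaders import IMSDbLoader
loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
docs = loader.load()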
Examples using IMSDbLoader¶
IMSDb | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
f0fd7e145840-0 | langchain.document_loaders.pdf.UnstructuredPDFLoader¶
class langchain.document_loaders.pdf.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load PDF files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("example.pdf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-pdf
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.UnstructuredPDFLoader.html |
03a6e30f302f-0 | langchain.document_loaders.notiondb.NotionDBLoader¶
class langchain.document_loaders.notiondb.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]¶
Load from Notion DB.
Reads content from pages within a Notion Database.
Parameters
integration_token (str) – Notion integration token.
database_id (str) – Notion database id.
request_timeout_sec (int) – Timeout for Notion requests in seconds.
Defaults to 10.
Initialize with parameters.
Methods
__init__(integration_token, database_id[, ...])
Initialize with parameters.
lazy_load()
A lazy loader for Documents.
load()
Load documents from the Notion database.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_page(page_summary)
Read a page.
__init__(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10) → None[source]¶
Initialize with parameters.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents from the Notion database.
Returns
List of documents (List[Document]).
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_page(page_summary: Dict[str, Any]) → Document[source]¶
Read a page.
Parameters
page_summary – Page summary from Notion API. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html |
03a6e30f302f-1 | Read a page.
Parameters
page_summary – Page summary from Notion API.
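A minimal usage sketch (the integration token is read from an environment variable and the database id is a placeholder):
import os
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token=os.environ["NOTION_TOKEN"],  # assumed env var
    database_id="DATABASE_ID",                     # placeholder
    request_timeout_sec=30,
)
docs = loader.load()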
Examples using NotionDBLoader¶
Notion DB
Notion DB 2/2 | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html |
6a066e21c09d-0 | langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader¶
class langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Load PySpark DataFrames.
Initialize with a Spark DataFrame object.
Parameters
spark_session – The SparkSession object.
df – The Spark DataFrame object.
page_content_column – The name of the column containing the page content.
Defaults to “text”.
fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
Methods
__init__([spark_session, df, ...])
Initialize with a Spark DataFrame object.
get_num_rows()
Gets the number of "feasible" rows for the DataFrame
lazy_load()
A lazy loader for document content.
load()
Load from the dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Initialize with a Spark DataFrame object.
Parameters
spark_session – The SparkSession object.
df – The Spark DataFrame object.
page_content_column – The name of the column containing the page content.
Defaults to “text”.
fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
get_num_rows() → Tuple[int, int][source]¶
Gets the number of “feasible” rows for the DataFrame
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html |
6a066e21c09d-1 | A lazy loader for document content.
load() → List[Document][source]¶
Load from the dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
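A minimal usage sketch (assumes pyspark is installed; the CSV path and column name are illustrative):
from pyspark.sql import SparkSession
from langchain.document_loaders import PySparkDataFrameLoader
spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("example.csv", header=True)  # hypothetical file
loader = PySparkDataFrameLoader(spark, df, page_content_column="text")
docs = loader.load()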
Examples using PySparkDataFrameLoader¶
PySpark DataFrame Loader | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html |
5a9be0d1541a-0 | langchain.document_loaders.parsers.docai.DocAIParser¶
class langchain.document_loaders.parsers.docai.DocAIParser(*, client: Optional[DocumentProcessorServiceClient] = None, location: Optional[str] = None, gcs_output_path: Optional[str] = None, processor_name: Optional[str] = None)[source]¶
Initializes the parser.
Parameters
client – a DocumentProcessorServiceClient to use
location – a GCP location where a DocAI parser is located
gcs_output_path – a path on GCS to store parsing results
processor_name – name of a processor
You should provide either a client or a location (in which case a client will be instantiated).
Methods
__init__(*[, client, location, ...])
Initializes the parser.
batch_parse(blobs[, gcs_output_path, ...])
Parses a list of blobs lazily.
docai_parse(blobs, *[, gcs_output_path, ...])
Runs Google DocAI PDF parser on a list of blobs.
get_results(operations)
is_running(operations)
lazy_parse(blob)
Parses a blob lazily.
operations_from_names(operation_names)
Initializes Long-Running Operations from their names.
parse(blob)
Eagerly parse the blob into a document or documents.
parse_from_results(results)
__init__(*, client: Optional[DocumentProcessorServiceClient] = None, location: Optional[str] = None, gcs_output_path: Optional[str] = None, processor_name: Optional[str] = None)[source]¶
Initializes the parser.
Parameters
client – a DocumentProcessorServiceClient to use
location – a GCP location where a DocAI parser is located
gcs_output_path – a path on GCS to store parsing results
processor_name – name of a processor
You should provide either a client or a location (and then a client would be instantiated).
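A minimal instantiation sketch; the location, processor name, and GCS bucket below are placeholders for your own Google Cloud resources:
from langchain.document_loaders.parsers.docai import DocAIParser

# All identifiers here are hypothetical examples; substitute your own project values.
parser = DocAIParser(
    location="us",
    processor_name="projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID",
    gcs_output_path="gs://BUCKET_NAME/docai_output/",
)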
batch_parse(blobs: Sequence[Blob], gcs_output_path: Optional[str] = None, timeout_sec: int = 3600, check_in_interval_sec: int = 60) → Iterator[Document][source]¶
Parses a list of blobs lazily.
Parameters
blobs – a list of blobs to parse
gcs_output_path – a path on GCS to store parsing results
timeout_sec – a timeout to wait for DocAI to complete, in seconds
check_in_interval_sec – an interval to wait until next check
whether parsing operations have been completed, in seconds
This is a long-running operation! A recommended way is to decouple parsing from creating LangChain Documents:
>>> operations = parser.docai_parse(blobs, gcs_output_path=gcs_path)
>>> parser.is_running(operations)
You can get operation names and save them:
>>> names = [op.operation.name for op in operations]
And when all operations are finished, you can use their results:
>>> operations = parser.operations_from_names(names)
>>> results = parser.get_results(operations)
>>> docs = parser.parse_from_results(results)
docai_parse(blobs: Sequence[Blob], *, gcs_output_path: Optional[str] = None, batch_size: int = 4000, enable_native_pdf_parsing: bool = True) → List[Operation][source]¶
Runs Google DocAI PDF parser on a list of blobs.
Parameters
blobs – a list of blobs to be parsed
gcs_output_path – a path (folder) on GCS to store results
batch_size – amount of documents per batch
enable_native_pdf_parsing – a config option for the parser
DocAI has a limit on the number of documents per batch, which is why a batch is split into mini-batches. Parsing is an async long-running operation
on Google Cloud, and results are stored in an output GCS bucket.
get_results(operations: List[Operation]) → List[DocAIParsingResults][source]¶
is_running(operations: List[Operation]) → bool[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Parses a blob lazily.
Parameters
blob – a Blob to parse
This is a long-running operation! A recommended way is to batch documents together and use the batch_parse method.
operations_from_names(operation_names: List[str]) → List[Operation][source]¶
Initializes Long-Running Operations from their names.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
parse_from_results(results: List[DocAIParsingResults]) → Iterator[Document][source]¶
Examples using DocAIParser¶
docai.md
langchain.document_loaders.conllu.CoNLLULoader¶
class langchain.document_loaders.conllu.CoNLLULoader(file_path: str)[source]¶
Load CoNLL-U files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
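A minimal usage sketch; the file path is a placeholder for your own CoNLL-U annotated file:
from langchain.document_loaders import CoNLLULoader

loader = CoNLLULoader("example_data/sample.conllu")
docs = loader.load()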
Examples using CoNLLULoader¶
CoNLL-U
langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
Generic Document Loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document_loaders import GenericLoader
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = GenericLoader.from_filesystem(
path="path/to/directory",
glob="**/[!.]*",
suffixes=[".pdf"],
show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")
Example instantiations to change which parser is used:
from langchain.document_loaders.parsers.pdf import PyPDFParser
# Recursively load all PDF files in a directory.
loader = GenericLoader.from_filesystem(
"/path/to/dir",
glob="**/*.pdf",
parser=PyPDFParser()
)
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser)
A generic document loader.
from_filesystem(path, *[, glob, exclude, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into sentences.
__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser) → None[source]¶
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
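For illustration, a direct construction that pairs a filesystem blob loader with a plain-text parser; the directory path and glob are placeholders, and from_filesystem below wraps the same idea:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.txt import TextParser

# Yield blobs for every .txt file under the directory, then parse each blob to a Document.
blob_loader = FileSystemBlobLoader("path/to/dir", glob="**/*.txt")
loader = GenericLoader(blob_loader, TextParser())
docs = loader.load()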
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', exclude: Sequence[str] = (), suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') → GenericLoader[source]¶
Create a generic document loader using a filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
exclude – A list of patterns to exclude from the loader.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
Returns
A generic document loader.
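A small sketch of from_filesystem combining a glob, an exclusion pattern, and an explicit parser; the paths and patterns are placeholders:
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.pdf import PyPDFParser

loader = GenericLoader.from_filesystem(
    "path/to/dir",
    glob="**/*.pdf",
    exclude=["**/drafts/**"],  # skip anything under a hypothetical drafts folder
    show_progress=True,
    parser=PyPDFParser(),
)
docs = loader.lazy_load()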
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load all documents and split them into sentences.
Examples using GenericLoader¶
Grobid
Loading documents from a YouTube url
Source Code
Set env var OPENAI_API_KEY or load from a .env file
langchain.document_loaders.bibtex.BibtexLoader¶
class langchain.document_loaders.bibtex.BibtexLoader(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Load a bibtex file.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Initialize the BibtexLoader.
Parameters
file_path – Path to the bibtex file.
parser – The parser to use. If None, a default parser is used.
max_docs – Max number of associated documents to load. Use -1 for no limit.
max_content_chars – Maximum number of characters to load from the PDF.
load_extra_metadata – Whether to load extra metadata from the PDF.
file_pattern – Regex pattern to match the file name in the bibtex.
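A minimal usage sketch; the .bib path is a placeholder, and the printed metadata fields depend on your bibtex entries:
from langchain.document_loaders import BibtexLoader

loader = BibtexLoader("references.bib")
docs = loader.load()
for doc in docs:
    print(doc.metadata)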
Methods
__init__(file_path, *[, parser, max_docs, ...])
Initialize the BibtexLoader.
lazy_load()
Load bibtex file using bibtexparser and get the article texts plus the article metadata.
load()
Load bibtex file documents from the given bibtex file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Initialize the BibtexLoader.
Parameters
file_path – Path to the bibtex file.
parser – The parser to use. If None, a default parser is used.
max_docs – Max number of associated documents to load. Use -1 for no limit.
max_content_chars – Maximum number of characters to load from the PDF.
load_extra_metadata – Whether to load extra metadata from the PDF.
file_pattern – Regex pattern to match the file name in the bibtex.
lazy_load() → Iterator[Document][source]¶
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[Document][source]¶
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BibtexLoader¶
BibTeX