id | text | source |
---|---|---|
7c5f59a4c716-0 | langchain.document_loaders.obsidian.ObsidianLoader¶
class langchain.document_loaders.obsidian.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Load Obsidian files from a directory.
Initialize with a path.
Parameters
path – Path to the directory containing the Obsidian files.
encoding – Charset encoding, defaults to “UTF-8”
collect_metadata – Whether to collect metadata from the front matter.
Defaults to True.
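A minimal usage sketch (the vault path below is a placeholder):
```
from langchain.document_loaders.obsidian import ObsidianLoader

loader = ObsidianLoader("path/to/obsidian/vault")
docs = loader.load()  # one Document per note; front matter is collected into metadata by default
```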
Attributes
DATAVIEW_INLINE_BRACKET_REGEX
DATAVIEW_INLINE_PAREN_REGEX
DATAVIEW_LINE_REGEX
FRONT_MATTER_REGEX
TAG_REGEX
Methods
__init__(path[, encoding, collect_metadata])
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Initialize with a path.
Parameters
path – Path to the directory containing the Obsidian files.
encoding – Charset encoding, defaults to “UTF-8”
collect_metadata – Whether to collect metadata from the front matter.
Defaults to True.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ObsidianLoader¶
Obsidian | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obsidian.ObsidianLoader.html |
1ac175f56159-0 | langchain.document_loaders.excel.UnstructuredExcelLoader¶
class langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft Excel files using Unstructured.
Like other
Unstructured loaders, UnstructuredExcelLoader can be used in both
“single” and “elements” mode. In “elements” mode, each sheet in the
Excel file becomes an Unstructured Table element, and an HTML
representation of the table is available under the
“text_as_html” key in the document metadata.
Examples
from langchain.document_loaders.excel import UnstructuredExcelLoader
loader = UnstructuredExcelLoader("stanley-cups.xlsx", mode="elements")
docs = loader.load()
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the Microsoft Excel file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
lazy_load() → Iterator[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html |
1ac175f56159-1 |
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredExcelLoader¶
Microsoft Excel | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html |
5c361ee9dcee-0 | langchain.document_loaders.parsers.pdf.PDFPlumberParser¶
class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False)[source]¶
Parse PDF with PDFPlumber.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
dedupe – Remove duplicated characters if dedupe=True.
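A minimal sketch of using the parser directly on a Blob (the file name is a placeholder; the pdfplumber package must be installed):
```
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PDFPlumberParser

parser = PDFPlumberParser(dedupe=True)
blob = Blob.from_path("example.pdf")
docs = parser.parse(blob)  # typically one Document per page
```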
Methods
__init__([text_kwargs, dedupe])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False) → None[source]¶
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
dedupe – Remove duplicated characters if dedupe=True.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html |
a35f36b5df45-0 | langchain.document_loaders.airbyte.AirbyteStripeLoader¶
class langchain.document_loaders.airbyte.AirbyteStripeLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Stripe using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
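A hedged sketch; the config keys below follow the airbyte-source-stripe connector spec and all values are placeholders:
```
from langchain.document_loaders.airbyte import AirbyteStripeLoader

config = {
    # placeholder values; consult the airbyte-source-stripe documentation for the full spec
    "client_secret": "<stripe api key>",
    "account_id": "<stripe account id>",
    "start_date": "<ISO 8601 date to start syncing from>",
}
loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()
```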
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
a35f36b5df45-1 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteStripeLoader¶
Airbyte Question Answering
Airbyte Stripe | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
65282ea0b070-0 | langchain.document_loaders.web_base.WebBaseLoader¶
class langchain.document_loaders.web_base.WebBaseLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load HTML pages using urllib and parse them with BeautifulSoup.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
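A minimal usage sketch:
```
from langchain.document_loaders.web_base import WebBaseLoader

loader = WebBaseLoader("https://www.example.com/")
docs = loader.load()  # fetches the page and extracts its text with BeautifulSoup
```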
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load text from the url(s) in web_path.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
65282ea0b070-1 |
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None[source]¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document][source]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any[source]¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document][source]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load text from the url(s) in web_path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
65282ea0b070-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any[source]¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]¶
Fetch all urls, then return soups for all results.
Examples using WebBaseLoader¶
RePhraseQueryRetriever
Ollama
Vectorstore
Zep
WebBaseLoader
MergeDocLoader
Set env var OPENAI_API_KEY or load from a .env file:
Set env var OPENAI_API_KEY or load from a .env file
Question Answering
Use local LLMs
MultiQueryRetriever
Combine agents and vector stores | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
638427d99cf6-0 | langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader¶
class langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Zendesk Support using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
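A hedged sketch; the config shape below follows the airbyte-source-zendesk-support connector spec as best understood, and all values are placeholders:
```
from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoader

config = {
    # placeholder values; consult the airbyte-source-zendesk-support documentation for the full spec
    "subdomain": "<your zendesk subdomain>",
    "start_date": "<ISO 8601 date to start syncing from>",
    "credentials": {"credentials": "api_token", "email": "<email>", "api_token": "<api token>"},
}
loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets")
docs = loader.load()
```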
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
638427d99cf6-1 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteZendeskSupportLoader¶
Airbyte Zendesk Support | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
7a184886c1d9-0 | langchain.document_loaders.roam.RoamLoader¶
class langchain.document_loaders.roam.RoamLoader(path: str)[source]¶
Load Roam files from a directory.
Initialize with a path.
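A minimal usage sketch (the export directory is a placeholder):
```
from langchain.document_loaders.roam import RoamLoader

loader = RoamLoader("path/to/roam_export")
docs = loader.load()  # one Document per Markdown file in the export
```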
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RoamLoader¶
Roam | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.roam.RoamLoader.html |
ea8bb2e02a96-0 | langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
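A minimal usage sketch (the .msg path is a placeholder; the extract_msg package must be installed):
```
from langchain.document_loaders.email import OutlookMessageLoader

loader = OutlookMessageLoader("message.msg")
docs = loader.load()  # the message body becomes the Document page_content
```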
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OutlookMessageLoader¶
Email | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html |
abab2d01f6bb-0 | langchain.document_loaders.reddit.RedditPostsLoader¶
class langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Load Reddit posts.
Read posts on a subreddit.
First, you need to go to
https://www.reddit.com/prefs/apps/
and create your application.
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries.
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
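A sketch with placeholder credentials; here the search queries are treated as subreddit names, assuming mode accepts "subreddit" (and "username") as in the praw-backed implementation:
```
from langchain.document_loaders.reddit import RedditPostsLoader

loader = RedditPostsLoader(
    client_id="<reddit client id>",
    client_secret="<reddit client secret>",
    user_agent="extractor by u/<your_username>",
    search_queries=["investing", "wallstreetbets"],  # subreddits (or usernames, depending on mode)
    mode="subreddit",
    categories=["hot", "new"],
    number_posts=10,
)
docs = loader.load()  # requires the praw package
```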
Methods
__init__(client_id, client_secret, ...[, ...])
Initialize with client_id, client_secret, user_agent, search_queries, mode,
lazy_load()
A lazy loader for Documents.
load()
Load Reddit posts.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html |
abab2d01f6bb-1 |
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Reddit posts.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RedditPostsLoader¶
Reddit | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html |
880541dd2c81-0 | langchain.document_loaders.dataframe.DataFrameLoader¶
class langchain.document_loaders.dataframe.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Pandas DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Pandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
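A minimal usage sketch:
```
import pandas as pd
from langchain.document_loaders.dataframe import DataFrameLoader

df = pd.DataFrame({"text": ["First document.", "Second document."], "author": ["alice", "bob"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()  # remaining columns (here: author) become Document metadata
```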
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Parameters
data_frame – Pandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document]¶
Lazy load records from dataframe.
load() → List[Document]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DataFrameLoader¶
Pandas DataFrame | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.DataFrameLoader.html |
c6141a9710f6-0 | langchain.document_loaders.embaas.EmbaasBlobLoader¶
class langchain.document_loaders.embaas.EmbaasBlobLoader[source]¶
Bases: BaseEmbaasLoader, BaseBlobParser
Load Embaas blob.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the Embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
The API key for the Embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the Embaas document extraction API. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
c6141a9710f6-1 |
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
c6141a9710f6-2 |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Parses the blob lazily.
Parameters
blob – The blob to parse.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
c6141a9710f6-3 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbaasBlobLoader¶
Embaas | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
a45496e406a8-0 | langchain.document_loaders.bilibili.BiliBiliLoader¶
class langchain.document_loaders.bilibili.BiliBiliLoader(video_urls: List[str])[source]¶
Load BiliBili video transcripts.
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
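A minimal usage sketch (the video URL is a placeholder):
```
from langchain.document_loaders.bilibili import BiliBiliLoader

loader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])
docs = loader.load()  # one Document per video, containing its transcript
```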
Methods
__init__(video_urls)
Initialize with bilibili url.
lazy_load()
A lazy loader for Documents.
load()
Load Documents from bilibili url.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(video_urls: List[str])[source]¶
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Documents from bilibili url.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BiliBiliLoader¶
BiliBili | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bilibili.BiliBiliLoader.html |
134b3d4f66f4-0 | langchain.document_loaders.dropbox.DropboxLoader¶
class langchain.document_loaders.dropbox.DropboxLoader[source]¶
Bases: BaseLoader, BaseModel
Load files from Dropbox.
In addition to common files such as text and PDF files, it also supports
Dropbox Paper files.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param dropbox_access_token: str [Required]¶
Dropbox access token.
param dropbox_file_paths: Optional[List[str]] = None¶
The file paths to load from.
param dropbox_folder_path: Optional[str] = None¶
The folder path to load from.
param recursive: bool = False¶
Flag to indicate whether to load files recursively from subfolders.
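A minimal sketch with placeholder credentials; the keyword arguments map directly to the params above:
```
from langchain.document_loaders.dropbox import DropboxLoader

loader = DropboxLoader(
    dropbox_access_token="<dropbox access token>",
    dropbox_folder_path="",  # "" means the root folder
    recursive=False,
)
docs = loader.load()
```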
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
134b3d4f66f4-1 |
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
134b3d4f66f4-2 |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DropboxLoader¶
Dropbox | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
6a04439d5b54-0 | langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader[source]¶
Bases: BaseLoader, BaseModel
Load from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
The Docugami API access token to use.
param api: str = 'https://api.docugami.com/v1preview1'¶
The Docugami API endpoint to use.
param docset_id: Optional[str] = None¶
The Docugami API docset ID to use.
param document_ids: Optional[Sequence[str]] = None¶
The Docugami API document IDs to use.
param file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None¶
The local file paths to use.
param min_chunk_size: int = 32¶
The minimum chunk size to use when parsing DGML. Defaults to 32.
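A minimal sketch with placeholder identifiers; the fields map directly to the params above:
```
from langchain.document_loaders.docugami import DocugamiLoader

loader = DocugamiLoader(
    access_token="<docugami api token>",
    docset_id="<docset id>",
    document_ids=None,  # None is assumed to load every document in the docset
)
docs = loader.load()  # requires the lxml package
```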
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
6a04439d5b54-1 |
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
6a04439d5b54-2 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DocugamiLoader¶
Docugami | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
e8cd759e37c1-0 | langchain.document_loaders.csv_loader.CSVLoader¶
class langchain.document_loaders.csv_loader.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]¶
Load a CSV file into a list of Documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and outputted to a new line in the document’s page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
Parameters
file_path – The path to the CSV file.
source_column – The name of the column in the CSV file to use as the source.
Optional. Defaults to None.
csv_args – A dictionary of arguments to pass to the csv.DictReader.
Optional. Defaults to None.
encoding – The encoding of the CSV file. Optional. Defaults to None.
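A minimal usage sketch (the file name and column names are placeholders):
```
from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(
    file_path="data.csv",
    source_column="id",           # optional: use the "id" column as each Document's source
    csv_args={"delimiter": ","},  # passed through to csv.DictReader
)
docs = loader.load()  # one Document per row
```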
Methods
__init__(file_path[, source_column, ...])
param file_path
The path to the CSV file.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]¶
Parameters
file_path – The path to the CSV file. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
e8cd759e37c1-1 |
source_column – The name of the column in the CSV file to use as the source.
Optional. Defaults to None.
csv_args – A dictionary of arguments to pass to the csv.DictReader.
Optional. Defaults to None.
encoding – The encoding of the CSV file. Optional. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CSVLoader¶
ChatGPT Plugin
CSV | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
f30f54849aa7-0 | langchain.document_loaders.embaas.EmbaasLoader¶
class langchain.document_loaders.embaas.EmbaasLoader[source]¶
Bases: BaseEmbaasLoader, BaseLoader
Load from Embaas.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the Embaas document extraction API.
param blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None¶
The blob loader to use. If not provided, a default one will be created.
param embaas_api_key: Optional[str] = None¶
The API key for the Embaas document extraction API.
param file_path: str [Required]¶
The path to the file to load. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
f30f54849aa7-1 |
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the Embaas document extraction API.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
f30f54849aa7-2 |
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Load the documents from the file path lazily.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
f30f54849aa7-3 | classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbaasLoader¶
Embaas | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
c86568f13c82-0 | langchain.document_loaders.cube_semantic.CubeSemanticLoader¶
class langchain.document_loaders.cube_semantic.CubeSemanticLoader(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
Load Cube semantic layer metadata.
Parameters
cube_api_url – REST API endpoint.
Use the REST API of your Cube’s deployment.
Please find out more information here:
https://cube.dev/docs/http-api/rest#configuration-base-path
cube_api_token – Cube API token.
Authentication tokens are generated based on your Cube’s API secret.
Please find out more information here:
https://cube.dev/docs/security#generating-json-web-tokens-jwt
load_dimension_values – Whether to load dimension values for every string
dimension or not.
dimension_values_limit – Maximum number of dimension values to load.
dimension_values_max_retries – Maximum number of retries to load dimension
values.
dimension_values_retry_delay – Delay between retries to load dimension values.
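A hedged sketch; the endpoint format and token below are placeholders, so adjust them to your Cube deployment as described in the links above:
```
from langchain.document_loaders.cube_semantic import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://<deployment>.cubecloud.dev/cubejs-api/v1/meta",  # placeholder REST endpoint
    cube_api_token="<jwt generated from your cube api secret>",
    load_dimension_values=False,
)
docs = loader.load()  # one Document per column of the semantic layer
```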
Methods
__init__(cube_api_url, cube_api_token[, ...])
lazy_load()
A lazy loader for Documents.
load()
Makes a call to Cube's REST API metadata endpoint.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Makes a call to Cube’s REST API metadata endpoint. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html |
c86568f13c82-1 |
Returns
A list of documents with attributes:
page_content = column_title + column_description
metadata: table_name, column_name, column_data_type, column_member_type,
column_title, column_description, column_values, cube_data_obj_type
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CubeSemanticLoader¶
Cube Semantic Layer | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html |
5e2124dab716-0 | langchain.document_loaders.chatgpt.concatenate_rows¶
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str[source]¶
Combine message information in a readable format ready to be used.
:param message: Message to be concatenated
:param title: Title of the conversation
Returns
Concatenated message | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.concatenate_rows.html |
d5032622828e-0 | langchain.document_loaders.college_confidential.CollegeConfidentialLoader¶
class langchain.document_loaders.college_confidential.CollegeConfidentialLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load College Confidential webpages.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
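A minimal usage sketch (the URL is a placeholder for a College Confidential page):
```
from langchain.document_loaders.college_confidential import CollegeConfidentialLoader

loader = CollegeConfidentialLoader("https://www.collegeconfidential.com/colleges/brown-university/")
docs = loader.load()  # the page is scraped and parsed with BeautifulSoup
```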
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpages as Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
d5032622828e-1 |
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages as Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
d5032622828e-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using CollegeConfidentialLoader¶
College Confidential | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
553c12c71043-0 | langchain.document_loaders.obs_directory.OBSDirectoryLoader¶
class langchain.document_loaders.obs_directory.OBSDirectoryLoader(bucket: str, endpoint: str, config: Optional[dict] = None, prefix: str = '')[source]¶
Load from Huawei OBS directory.
Initialize the OBSDirectoryLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
endpoint (str) – The endpoint URL of your OBS bucket.
config (dict) – The parameters for connecting to OBS, provided as a dictionary. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
prefix (str, optional) – The prefix to be added to the OBS key. Defaults to “”.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSDirectoryLoader:
```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key",
}
directory_loader = OBSDirectoryLoader("your-bucket-name", "your-endpoint", config, "your-prefix")
```
Methods | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
553c12c71043-1 |
__init__(bucket, endpoint[, config, prefix])
Initialize the OBSDirectoryLoader with the specified settings.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, endpoint: str, config: Optional[dict] = None, prefix: str = '')[source]¶
Initialize the OBSDirectoryLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
endpoint (str) – The endpoint URL of your OBS bucket.
config (dict) – The parameters for connecting to OBS, provided as a dictionary. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
prefix (str, optional) – The prefix to be added to the OBS key. Defaults to “”.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSDirectoryLoader:
553c12c71043-2 | ```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key",
}
directory_loader = OBSDirectoryLoader("your-bucket-name", "your-endpoint", config, "your-prefix")
```
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OBSDirectoryLoader¶
Huawei OBS Directory | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
6510290ae10d-0 | langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader¶
class langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Load from Hugging Face Hub datasets.
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
Note: Currently the function assumes the content is a string.
If it is not, download the dataset using the huggingface library and convert it
using the JSON or pandas loaders. See
https://github.com/langchain-ai/langchain/issues/10674
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
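A minimal usage sketch (the dataset name and column below are illustrative assumptions; requires the datasets package):
```python
from langchain.document_loaders import HuggingFaceDatasetLoader

# "imdb" and its "text" column are placeholder choices for illustration
loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()
```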
Methods
__init__(path[, page_content_column, name, ...])
Initialize the HuggingFaceDatasetLoader.
lazy_load()
Load documents lazily.
load()
Load documents.
load_and_split([text_splitter]) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html |
6510290ae10d-1 | load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
Note: Currently the function assumes the content is a string.
If it is not, download the dataset using the huggingface library and convert it
using the JSON or pandas loaders. See
https://github.com/langchain-ai/langchain/issues/10674
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
lazy_load() → Iterator[Document][source]¶
Load documents lazily.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html |
6510290ae10d-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using HuggingFaceDatasetLoader¶
HuggingFace dataset | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html |
2b0360948288-0 | langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
Load TOML files.
It can load a single source file or several files in a single
directory.
Initialize the TomlLoader with a source file or directory.
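A minimal usage sketch (the file path below is a placeholder; a directory of .toml files also works):
```python
from langchain.document_loaders import TomlLoader

loader = TomlLoader("example_data/config.toml")  # placeholder path
docs = loader.load()
```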
Methods
__init__(source)
Initialize the TomlLoader with a source file or directory.
lazy_load()
Lazily load the TOML documents from the source file or directory.
load()
Load and return all documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(source: Union[str, Path])[source]¶
Initialize the TomlLoader with a source file or directory.
lazy_load() → Iterator[Document][source]¶
Lazily load the TOML documents from the source file or directory.
load() → List[Document][source]¶
Load and return all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TomlLoader¶
TOML | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html |
894ef93bf4e1-0 | langchain.document_loaders.parsers.grobid.GrobidParser¶
class langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]¶
Load article PDF files using Grobid.
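A hedged usage sketch: the parser is typically combined with a GenericLoader over a directory of PDFs. The directory path is a placeholder, and a Grobid server is assumed to be running at the default localhost URL:
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import GrobidParser

# "papers/" is a placeholder directory of article PDFs
loader = GenericLoader.from_filesystem(
    "papers/",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=False),
)
docs = loader.load()
```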
Methods
__init__(segment_sentences[, grobid_server])
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
process_xml(file_path, xml_data, ...)
Process the XML file from Grobid.
__init__(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument') → None[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
process_xml(file_path: str, xml_data: str, segment_sentences: bool) → Iterator[Document][source]¶
Process the XML file from Grobid.
Examples using GrobidParser¶
Grobid | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html |
36c9149e31f2-0 | langchain.document_loaders.epub.UnstructuredEPubLoader¶
class langchain.document_loaders.epub.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load EPub files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredEPubLoader
loader = UnstructuredEPubLoader("example.epub", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-epub
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html |
36c9149e31f2-1 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEPubLoader¶
EPub | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html |
458619fb7fea-0 | langchain.document_loaders.email.UnstructuredEmailLoader¶
class langchain.document_loaders.email.UnstructuredEmailLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load email files using Unstructured.
Works with both
.eml and .msg files. You can process attachments in addition to the
e-mail message itself by passing process_attachments=True into the
constructor for the loader. By default, attachments will be processed
with the unstructured partition function. If you already know the document
types of the attachments, you can specify another partitioning function
with the attachment partitioner kwarg.
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")
loader.load()
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader(
    "example_data/fake-email-attachment.eml",
    mode="elements",
    process_attachments=True,
)
loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html |
458619fb7fea-1 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEmailLoader¶
Email | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html |
ff678e1bc332-0 | langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
Parameters
file_path – The path to the file to detect the encoding for.
timeout – The timeout in seconds for the encoding detection. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html |
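A minimal usage sketch (the file path is a placeholder; requires the chardet package):
```python
from langchain.document_loaders.helpers import detect_file_encodings

encodings = detect_file_encodings("legacy.txt")  # placeholder path
for file_encoding in encodings:
    print(file_encoding.encoding, file_encoding.confidence)
```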
86a7d99ce028-0 | langchain.document_loaders.youtube.GoogleApiYoutubeLoader¶
class langchain.document_loaders.youtube.GoogleApiYoutubeLoader(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]¶
Load all Videos from a YouTube Channel.
To use, you should have the googleapiclient,youtube_transcript_api
python package installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally, you have to provide either a channel name or a list of video ids.
“https://developers.google.com/docs/api/quickstart/python”
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
Attributes
add_video_info
captions_language
channel_name
continue_on_failure
video_ids
google_api_client
Methods
__init__(google_api_client[, channel_name, ...])
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
validate_channel_or_videoIds_is_set(values)
Validate that either channel_name or video_ids is set, but not both.
86a7d99ce028-1 | Validate that either channel_name or video_ids is set, but not both.
__init__(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False) → None¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]¶
Validate that either channel_name or video_ids is set, but not both.
Examples using GoogleApiYoutubeLoader¶
YouTube
YouTube transcripts | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html |
2e7e2c881be6-0 | langchain.document_loaders.unstructured.UnstructuredAPIFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured API.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
```python
from langchain.document_loaders import UnstructuredAPIFileLoader
loader = UnstructuredAPIFileLoader(
    "example.pdf", mode="elements", strategy="fast", api_key="MY_API_KEY",
)
docs = loader.load()
```
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
__init__([file_path, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html |
2e7e2c881be6-1 | lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredAPIFileLoader¶
Unstructured File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html |
dc873e322993-0 | langchain.document_loaders.pdf.PDFMinerLoader¶
class langchain.document_loaders.pdf.PDFMinerLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Load PDF files using PDFMiner.
Initialize with file path.
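A minimal usage sketch (the file path is a placeholder; requires the pdfminer.six package):
```python
from langchain.document_loaders import PDFMinerLoader

loader = PDFMinerLoader("example_data/layout-parser-paper.pdf")  # placeholder path
docs = loader.load()
```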
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with file path.
lazy_load()
Lazily load documents.
load()
Eagerly load the content.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None) → None[source]¶
Initialize with file path.
lazy_load() → Iterator[Document][source]¶
Lazily load documents.
load() → List[Document][source]¶
Eagerly load the content.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerLoader.html |
c71a84c5dbf6-0 | langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader¶
class langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]¶
Load from Azure Blob Storage files.
Initialize with connection string, container and blob name.
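A minimal usage sketch (connection string, container and blob names are placeholders; requires the azure-storage-blob package):
```python
from langchain.document_loaders import AzureBlobStorageFileLoader

loader = AzureBlobStorageFileLoader(
    conn_str="<your-connection-string>",
    container="<container-name>",
    blob_name="<blob-name>",
)
docs = loader.load()
```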
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
blob
Blob name.
Methods
__init__(conn_str, container, blob_name)
Initialize with connection string, container and blob name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conn_str: str, container: str, blob_name: str)[source]¶
Initialize with connection string, container and blob name.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AzureBlobStorageFileLoader¶
Azure Blob Storage
Azure Blob Storage File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader.html |
05f57c64bc48-0 | langchain.document_loaders.pdf.DocumentIntelligenceLoader¶
class langchain.document_loaders.pdf.DocumentIntelligenceLoader(file_path: str, client: Any, model: str = 'prebuilt-document', headers: Optional[Dict] = None)[source]¶
Loads a PDF with Azure Document Intelligence
Initialize the object for file processing with Azure Document Intelligence
(formerly Form Recognizer).
This constructor initializes a DocumentIntelligenceParser object to be used
for parsing files using the Azure Document Intelligence API. The load method
generates a Document node including metadata (source blob and page number)
for each page.
file_path (str): The path to the file that needs to be parsed.
client (Any): A DocumentAnalysisClient to perform the analysis of the blob.
model (str): The model name or ID to be used for form recognition in Azure.
>>> obj = DocumentIntelligenceLoader(
... file_path="path/to/file",
... client=client,
... model="prebuilt-document"
... )
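A hedged sketch of how the client used above might be constructed, assuming the azure-ai-formrecognizer package and placeholder endpoint/key values:
```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

from langchain.document_loaders.pdf import DocumentIntelligenceLoader

# Endpoint and key are placeholders for your Azure resource credentials
client = DocumentAnalysisClient(
    endpoint="https://<resource-name>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key>"),
)
loader = DocumentIntelligenceLoader("path/to/file", client=client, model="prebuilt-document")
docs = loader.load()
```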
Attributes
source
Methods
__init__(file_path, client[, model, headers])
Initialize the object for file processing with Azure Document Intelligence (formerly Form Recognizer).
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, client: Any, model: str = 'prebuilt-document', headers: Optional[Dict] = None) → None[source]¶
Initialize the object for file processing with Azure Document Intelligence
(formerly Form Recognizer).
This constructor initializes a DocumentIntelligenceParser object to be used
for parsing files using the Azure Document Intelligence API. The load method
generates a Document node including metadata (source blob and page number)
for each page. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.DocumentIntelligenceLoader.html |
05f57c64bc48-1 | generates a Document node including metadata (source blob and page number)
for each page.
file_path (str): The path to the file that needs to be parsed.
client (Any): A DocumentAnalysisClient to perform the analysis of the blob.
model (str): The model name or ID to be used for form recognition in Azure.
>>> obj = DocumentIntelligenceLoader(
... file_path="path/to/file",
... client=client,
... model="prebuilt-document"
... )
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DocumentIntelligenceLoader¶
Azure Document Intelligence | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.DocumentIntelligenceLoader.html |
d9781d563811-0 | langchain.document_loaders.etherscan.EtherscanLoader¶
class langchain.document_loaders.etherscan.EtherscanLoader(account_address: str, api_key: str = 'docs-demo', filter: str = 'normal_transaction', page: int = 1, offset: int = 10, start_block: int = 0, end_block: int = 99999999, sort: str = 'desc')[source]¶
Load transactions from Ethereum mainnet.
The loader uses the Etherscan API to interact with the Ethereum mainnet.
The ETHERSCAN_API_KEY environment variable must be set to use this loader.
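A minimal usage sketch (the account address is a placeholder; set ETHERSCAN_API_KEY in your environment first):
```python
from langchain.document_loaders import EtherscanLoader

# Placeholder account address; ETHERSCAN_API_KEY must already be set
loader = EtherscanLoader("0x0000000000000000000000000000000000000000", filter="normal_transaction")
docs = loader.load()
```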
Methods
__init__(account_address[, api_key, filter, ...])
getERC1155Tx()
getERC20Tx()
getERC721Tx()
getEthBalance()
getInternalTx()
getNormTx()
lazy_load()
Lazy load Documents from table.
load()
Load transactions from a specific account via Etherscan.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(account_address: str, api_key: str = 'docs-demo', filter: str = 'normal_transaction', page: int = 1, offset: int = 10, start_block: int = 0, end_block: int = 99999999, sort: str = 'desc')[source]¶
getERC1155Tx() → List[Document][source]¶
getERC20Tx() → List[Document][source]¶
getERC721Tx() → List[Document][source]¶
getEthBalance() → List[Document][source]¶
getInternalTx() → List[Document][source]¶
getNormTx() → List[Document][source]¶
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from table. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.etherscan.EtherscanLoader.html |
d9781d563811-1 | lazy_load() → Iterator[Document][source]¶
Lazy load Documents from table.
load() → List[Document][source]¶
Load transactions from a specific account via Etherscan.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using EtherscanLoader¶
Etherscan Loader | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.etherscan.EtherscanLoader.html |
8aee69212e0d-0 | langchain.document_loaders.datadog_logs.DatadogLogsLoader¶
class langchain.document_loaders.datadog_logs.DatadogLogsLoader(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100)[source]¶
Load Datadog logs.
Logs are written into the page_content and into the metadata.
Initialize Datadog document loader.
Requirements:
Must have datadog_api_client installed. Install with pip install datadog_api_client.
Parameters
query – The query to run in Datadog.
api_key – The Datadog API key.
app_key – The Datadog APP key.
from_time – Optional. The start of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to 20 minutes ago.
to_time – Optional. The end of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to now.
limit – The maximum number of logs to return.
Defaults to 100.
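A minimal usage sketch (the query, API key and APP key are placeholders; requires the datadog_api_client package):
```python
from langchain.document_loaders import DatadogLogsLoader

loader = DatadogLogsLoader(
    query="service:agent status:error",  # placeholder query
    api_key="<datadog-api-key>",
    app_key="<datadog-app-key>",
    limit=50,
)
docs = loader.load()
```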
Methods
__init__(query, api_key, app_key[, ...])
Initialize Datadog document loader.
lazy_load()
A lazy loader for Documents.
load()
Get logs from Datadog.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_log(log)
Create Document objects from Datadog log items.
__init__(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100) → None[source]¶
Initialize Datadog document loader. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
8aee69212e0d-1 | Initialize Datadog document loader.
Requirements:
Must have datadog_api_client installed. Install with pip install datadog_api_client.
Parameters
query – The query to run in Datadog.
api_key – The Datadog API key.
app_key – The Datadog APP key.
from_time – Optional. The start of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to 20 minutes ago.
to_time – Optional. The end of the time range to query.
Supports date math and regular timestamps (milliseconds) like ‘1688732708951’
Defaults to now.
limit – The maximum number of logs to return.
Defaults to 100.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Get logs from Datadog.
Returns
A list of Document objects, each with:
page_content
metadata – id, service, status, tags, timestamp
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_log(log: dict) → Document[source]¶
Create Document objects from Datadog log items.
Examples using DatadogLogsLoader¶
Datadog Logs | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
333d7f633c8a-0 | langchain.document_loaders.unstructured.UnstructuredBaseLoader¶
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', post_processors: Optional[List[Callable]] = None, **unstructured_kwargs: Any)[source]¶
Base Loader that uses Unstructured.
Initialize with file path.
Methods
__init__([mode, post_processors])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(mode: str = 'single', post_processors: Optional[List[Callable]] = None, **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredBaseLoader.html |
4ff1dc43e0f9-0 | langchain.document_loaders.python.PythonLoader¶
class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶
Load Python files, respecting any non-default encoding if specified.
Initialize with a file path.
Parameters
file_path – The path to the file to load.
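A minimal usage sketch (the file path is a placeholder):
```python
from langchain.document_loaders import PythonLoader

loader = PythonLoader("my_script.py")  # placeholder path
docs = loader.load()
```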
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html |
5129d69c242b-0 | langchain.document_loaders.acreom.AcreomLoader¶
class langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Load acreom vault from a directory.
Initialize the loader.
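A minimal usage sketch (the vault path is a placeholder):
```python
from langchain.document_loaders import AcreomLoader

loader = AcreomLoader("path/to/acreom/vault", collect_metadata=False)  # placeholder path
docs = loader.load()
```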
Attributes
FRONT_MATTER_REGEX
Regex to match front matter metadata in markdown files.
file_path
Path to the directory containing the markdown files.
encoding
Encoding to use when reading the files.
collect_metadata
Whether to collect metadata from the front matter.
Methods
__init__(path[, encoding, collect_metadata])
Initialize the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Initialize the loader.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AcreomLoader¶
acreom | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html |
39540c2403ac-0 | langchain.document_loaders.html_bs.BSHTMLLoader¶
class langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Load HTML files and parse them with beautiful soup.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when calling get_text on the soup.
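A minimal usage sketch (the file path is a placeholder; requires the beautifulsoup4 package):
```python
from langchain.document_loaders import BSHTMLLoader

loader = BSHTMLLoader("example_data/fake-content.html", get_text_separator=" ")  # placeholder path
docs = loader.load()
```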
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load HTML document into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '') → None[source]¶
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when calling get_text on the soup.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load HTML document into document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
39540c2403ac-1 | load() → List[Document][source]¶
Load HTML document into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
9249d4c99b1a-0 | langchain.document_loaders.unstructured.get_elements_from_api¶
langchain.document_loaders.unstructured.get_elements_from_api(file_path: Optional[Union[str, List[str]]] = None, file: Optional[Union[IO, Sequence[IO]]] = None, api_url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any) → List[source]¶
Retrieve a list of elements from the Unstructured API. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.get_elements_from_api.html |
8421e74c922f-0 | langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob[source]¶
Bases: BaseModel
Blob represents raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
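A minimal usage sketch (both the file path and the inline data are illustrative):
```python
from langchain.document_loaders.blob_loaders import Blob

file_blob = Blob.from_path("example_data/report.pdf")              # lazy reference to a file on disk
text_blob = Blob.from_data("hello world", mime_type="text/plain")  # in-memory data
print(file_blob.mimetype, text_blob.as_string())
```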
param data: Optional[Union[bytes, str]] = None¶
param encoding: str = 'utf-8'¶
param mimetype: Optional[str] = None¶
param path: Optional[Union[str, pathlib.PurePath]] = None¶
as_bytes() → bytes[source]¶
Read data as bytes.
as_bytes_io() → Generator[Union[BytesIO, BufferedReader], None, None][source]¶
Read data as a byte stream.
as_string() → str[source]¶
Read data as a string.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
8421e74c922f-1 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) → Blob[source]¶
Initialize the blob from in-memory data.
Parameters
data – the in-memory data associated with the blob
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
path – if provided, will be set as the source from which the data came
Returns
Blob instance
classmethod from_orm(obj: Any) → Model¶
classmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) → Blob[source]¶
Load the blob from a path like object.
Parameters
path – path like object to file to be read | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
8421e74c922f-2 | Parameters
path – path like object to file to be read
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
8421e74c922f-3 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
Examples using Blob¶
docai.md
Embaas | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
c5591a01804c-0 | langchain.document_loaders.confluence.ContentFormat¶
class langchain.document_loaders.confluence.ContentFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the content formats of Confluence page.
EDITOR = 'body.editor'¶
EXPORT_VIEW = 'body.export_view'¶
ANONYMOUS_EXPORT_VIEW = 'body.anonymous_export_view'¶
STORAGE = 'body.storage'¶
VIEW = 'body.view'¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html |
3e1cb8e27463-0 | langchain.document_loaders.iugu.IuguLoader¶
class langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]¶
Load from IUGU.
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
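A minimal usage sketch (the resource name and token are placeholders):
```python
from langchain.document_loaders import IuguLoader

loader = IuguLoader("invoices", api_token="<your-iugu-api-token>")  # placeholder values
docs = loader.load()
```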
Methods
__init__(resource[, api_token])
Initialize the IUGU resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, api_token: Optional[str] = None) → None[source]¶
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using IuguLoader¶
Iugu | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.iugu.IuguLoader.html |
a063464acb64-0 | langchain.document_loaders.parsers.txt.TextParser¶
class langchain.document_loaders.parsers.txt.TextParser[source]¶
Parser for text blobs.
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.txt.TextParser.html |
047dfe38d8ec-0 | langchain.document_loaders.chromium.AsyncChromiumLoader¶
class langchain.document_loaders.chromium.AsyncChromiumLoader(urls: List[str])[source]¶
Scrape HTML pages from URLs using a headless instance of Chromium.
Initialize the loader with a list of URL paths.
Parameters
urls (List[str]) – A list of URLs to scrape content from.
Raises
ImportError – If the required ‘playwright’ package is not installed.
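A minimal usage sketch (the URL is a placeholder; requires the playwright package and a Chromium install via `playwright install chromium`):
```python
from langchain.document_loaders import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://example.com"])  # placeholder URL
docs = loader.load()
print(docs[0].page_content[:200])
```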
Methods
__init__(urls)
Initialize the loader with a list of URL paths.
ascrape_playwright(url)
Asynchronously scrape the content of a given URL using Playwright's async API.
lazy_load()
Lazily load text content from the provided URLs.
load()
Load and return all Documents from the provided URLs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str])[source]¶
Initialize the loader with a list of URL paths.
Parameters
urls (List[str]) – A list of URLs to scrape content from.
Raises
ImportError – If the required ‘playwright’ package is not installed.
async ascrape_playwright(url: str) → str[source]¶
Asynchronously scrape the content of a given URL using Playwright’s async API.
Parameters
url (str) – The URL to scrape.
Returns
The scraped HTML content or an error message if an exception occurs.
Return type
str
lazy_load() → Iterator[Document][source]¶
Lazily load text content from the provided URLs.
This method yields Documents one at a time as they’re scraped,
instead of waiting to scrape all URLs before returning.
Yields
Document – The scraped content encapsulated within a Document object.
load() → List[Document][source]¶
Load and return all Documents from the provided URLs. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chromium.AsyncChromiumLoader.html |
047dfe38d8ec-1 | Load and return all Documents from the provided URLs.
Returns
A list of Document objects
containing the scraped content from each URL.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AsyncChromiumLoader¶
Beautiful Soup
Async Chromium
Set env var OPENAI_API_KEY or load from a .env file: | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chromium.AsyncChromiumLoader.html |
81fa0850877c-0 | langchain.document_loaders.pdf.AmazonTextractPDFLoader¶
class langchain.document_loaders.pdf.AmazonTextractPDFLoader(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None, headers: Optional[Dict] = None)[source]¶
Load PDF files from a local file system, HTTP or S3.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Amazon Textract service.
Example
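A minimal sketch (the document path is a placeholder; requires the amazon-textract-caller package and configured AWS credentials):
```python
from langchain.document_loaders import AmazonTextractPDFLoader

# Accepts a local path, an HTTPS URL or an s3:// URI; this one is a placeholder
loader = AmazonTextractPDFLoader("example_data/document.pdf")
docs = loader.load()
```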
Initialize the loader.
Parameters
file_path – A file, url or s3 path for input file
textract_features – Features to be used for extraction, each feature
should be passed as a str that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client (Optional)
credentials_profile_name – AWS profile name, if not default (Optional)
region_name – AWS region, eg us-east-1 (Optional)
endpoint_url – endpoint url for the textract service (Optional)
Attributes
source
Methods
__init__(file_path[, textract_features, ...])
Initialize the loader.
lazy_load()
Lazy load documents
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html |
81fa0850877c-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None, headers: Optional[Dict] = None) → None[source]¶
Initialize the loader.
Parameters
file_path – A file, url or s3 path for input file
textract_features – Features to be used for extraction, each feature
should be passed as a str that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client (Optional)
credentials_profile_name – AWS profile name, if not default (Optional)
region_name – AWS region, eg us-east-1 (Optional)
endpoint_url – endpoint url for the textract service (Optional)
lazy_load() → Iterator[Document][source]¶
Lazy load documents
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AmazonTextractPDFLoader¶
Amazon Textract | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html |
f6c0abe53350-0 | langchain.document_loaders.markdown.UnstructuredMarkdownLoader¶
class langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Markdown files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredMarkdownLoader
loader = UnstructuredMarkdownLoader("example.md", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-md
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html |
f6c0abe53350-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredMarkdownLoader¶
StarRocks | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html |
d34f861be2ff-0 | langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader¶
class langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(urls: List[str], save_dir: str)[source]¶
Load YouTube urls as audio file(s).
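A minimal usage sketch (the video URL and save directory are placeholders; requires the yt_dlp package):
```python
from langchain.document_loaders.blob_loaders import YoutubeAudioLoader

loader = YoutubeAudioLoader(
    ["https://www.youtube.com/watch?v=<video-id>"],  # placeholder URL
    save_dir="downloads/",                           # placeholder directory
)
for blob in loader.yield_blobs():
    print(blob.path)
```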
Methods
__init__(urls, save_dir)
yield_blobs()
Yield audio blobs for each url.
__init__(urls: List[str], save_dir: str)[source]¶
yield_blobs() → Iterable[Blob][source]¶
Yield audio blobs for each url.
Examples using YoutubeAudioLoader¶
Loading documents from a YouTube url | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader.html |
3b7279cdc5dc-0 | langchain.document_loaders.arcgis_loader.ArcGISLoader¶
class langchain.document_loaders.arcgis_loader.ArcGISLoader(layer: Union[str, arcgis.features.FeatureLayer], gis: Optional[arcgis.gis.GIS] = None, where: str = '1=1', out_fields: Optional[Union[List[str], str]] = None, return_geometry: bool = False, return_all_records: bool = True, lyr_desc: Optional[str] = None, **kwargs: Any)[source]¶
Load records from an ArcGIS FeatureLayer.
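A minimal usage sketch (the feature-layer URL is a placeholder; requires the arcgis package):
```python
from langchain.document_loaders import ArcGISLoader

# Placeholder feature-layer URL
layer_url = "https://services.arcgis.com/<org-id>/arcgis/rest/services/<layer>/FeatureServer/0"
loader = ArcGISLoader(layer_url, out_fields="*")
docs = loader.load()
```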
Methods
__init__(layer[, gis, where, out_fields, ...])
lazy_load()
Lazy load records from FeatureLayer.
load()
Load all records from FeatureLayer.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(layer: Union[str, arcgis.features.FeatureLayer], gis: Optional[arcgis.gis.GIS] = None, where: str = '1=1', out_fields: Optional[Union[List[str], str]] = None, return_geometry: bool = False, return_all_records: bool = True, lyr_desc: Optional[str] = None, **kwargs: Any)[source]¶
lazy_load() → Iterator[Document][source]¶
Lazy load records from FeatureLayer.
load() → List[Document][source]¶
Load all records from FeatureLayer.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ArcGISLoader¶
ArcGIS | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arcgis_loader.ArcGISLoader.html |
94d435edc940-0 | langchain.document_loaders.evernote.EverNoteLoader¶
class langchain.document_loaders.evernote.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]¶
Load from EverNote.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document, any non content metadata (e.g. ‘author’, ‘created’, ‘updated’ etc.
but not ‘content-raw’ or ‘resource’) tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document.
If this is set to True, the only metadata on the Document will be the 'source', which contains the file name of the export.
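A minimal usage sketch (the export file name is a placeholder; requires the lxml and html2text packages):
```python
from langchain.document_loaders import EverNoteLoader

loader = EverNoteLoader("my_notebook.enex", load_single_document=True)  # placeholder path
docs = loader.load()
```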
Initialize with file path.
Methods
__init__(file_path[, load_single_document])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents from EverNote export file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, load_single_document: bool = True)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents from EverNote export file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html |
94d435edc940-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using EverNoteLoader¶
EverNote | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html |
152e2ee80097-0 | langchain.document_loaders.image_captions.ImageCaptionLoader¶
class langchain.document_loaders.image_captions.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Load image captions.
By default, the loader utilizes the pre-trained
Salesforce BLIP image captioning model.
https://huggingface.co/Salesforce/blip-image-captioning-base
Initialize with a list of image paths
Parameters
path_images – A list of image paths.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
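A minimal usage sketch (the image paths are placeholders; requires the transformers package):
```python
from langchain.document_loaders import ImageCaptionLoader

loader = ImageCaptionLoader(["photos/cat.png", "photos/dog.jpg"])  # placeholder paths
docs = loader.load()
```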
Methods
__init__(path_images[, blip_processor, ...])
Initialize with a list of image paths
lazy_load()
A lazy loader for Documents.
load()
Load from a list of image files
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Initialize with a list of image paths
Parameters
path_images – A list of image paths.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a list of image files
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html |
152e2ee80097-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ImageCaptionLoader¶
Image captions | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html |
82fcb39fdea6-0 | langchain.document_loaders.xorbits.XorbitsLoader¶
class langchain.document_loaders.xorbits.XorbitsLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Xorbits DataFrame.
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install it with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
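A minimal usage sketch (the in-memory frame below is purely illustrative):
```python
import xorbits.pandas as xpd

from langchain.document_loaders import XorbitsLoader

df = xpd.DataFrame({"text": ["first document", "second document"]})  # illustrative data
loader = XorbitsLoader(df, page_content_column="text")
docs = loader.load()
```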
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install it with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document]¶
Lazy load records from dataframe.
load() → List[Document]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using XorbitsLoader¶
Xorbits Pandas DataFrame
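A minimal XorbitsLoader sketch, assuming xorbits is installed; the DataFrame contents and column names below are illustrative:
import xorbits.pandas as xpd
from langchain.document_loaders.xorbits import XorbitsLoader
df = xpd.DataFrame({"text": ["first document", "second document"], "source": ["a", "b"]})
loader = XorbitsLoader(df, page_content_column="text")
docs = loader.load()  # remaining columns (here "source") are expected to land in each Document's metadata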
langchain.document_loaders.rocksetdb.ColumnNotFoundError¶
class langchain.document_loaders.rocksetdb.ColumnNotFoundError(missing_key: str, query: str)[source]¶
Column not found error.
langchain.document_loaders.pubmed.PubMedLoader¶
class langchain.document_loaders.pubmed.PubMedLoader(query: str, load_max_docs: Optional[int] = 3)[source]¶
Load from the PubMed biomedical library.
query¶
The query to be passed to the PubMed API.
load_max_docs¶
The maximum number of documents to load.
Initialize the PubMedLoader.
Parameters
query – The query to be passed to the PubMed API.
load_max_docs – The maximum number of documents to load.
Defaults to 3.
Methods
__init__(query[, load_max_docs])
Initialize the PubMedLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, load_max_docs: Optional[int] = 3)[source]¶
Initialize the PubMedLoader.
Parameters
query – The query to be passed to the PubMed API.
load_max_docs – The maximum number of documents to load.
Defaults to 3.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PubMedLoader¶
PubMed
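A minimal PubMedLoader usage sketch; the query string below is arbitrary:
from langchain.document_loaders.pubmed import PubMedLoader
loader = PubMedLoader(query="chatgpt", load_max_docs=3)
docs = loader.load()  # page_content is typically the article abstract, with article details in metadata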
langchain.document_loaders.parsers.registry.get_parser¶
langchain.document_loaders.parsers.registry.get_parser(parser_name: str) → BaseBlobParser[source]¶
Get a parser by parser name.
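A hedged sketch; the parser name "default" is an assumption, so check the registry module in your installed version for the names it actually registers:
from langchain.document_loaders.parsers.registry import get_parser
# "default" is assumed to be a registered parser name in this version.
parser = get_parser("default")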
langchain.document_loaders.image.UnstructuredImageLoader¶
class langchain.document_loaders.image.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load PNG and JPG files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredImageLoader
loader = UnstructuredImageLoader("example.png", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-image
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredImageLoader¶
Images
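For comparison with the "elements" example above, a sketch of the default "single" mode; the file name is a placeholder, and unstructured may require OCR dependencies such as tesseract for image files:
from langchain.document_loaders import UnstructuredImageLoader
loader = UnstructuredImageLoader("example.png")  # mode defaults to "single"
docs = loader.load()  # one Document holding the text extracted from the image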
langchain.document_loaders.mongodb.MongodbLoader¶
class langchain.document_loaders.mongodb.MongodbLoader(connection_string: str, db_name: str, collection_name: str, *, filter_criteria: Optional[Dict] = None)[source]¶
Load MongoDB documents.
Methods
__init__(connection_string, db_name, ...[, ...])
aload()
Load data into Document objects.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(connection_string: str, db_name: str, collection_name: str, *, filter_criteria: Optional[Dict] = None) → None[source]¶
async aload() → List[Document][source]¶
Load data into Document objects.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
Attention:
This implementation starts an asyncio event loop, so it will only work
when called from a synchronous environment. In an async environment it
will fail because an event loop is already running.
This code should be updated to kick off the event loop from a separate
thread if running within an async context.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
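A minimal MongodbLoader sketch; the connection string, database, collection, and filter below are all placeholders:
from langchain.document_loaders.mongodb import MongodbLoader
loader = MongodbLoader(
    connection_string="mongodb://localhost:27017/",  # placeholder URI
    db_name="sample_db",  # placeholder database name
    collection_name="sample_collection",  # placeholder collection name
    filter_criteria={"status": "published"},  # optional query filter
)
docs = loader.load()  # call from a synchronous context (see the note above)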
langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Parse PDF using PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
Methods
__init__([text_kwargs])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(text_kwargs: Optional[Mapping[str, Any]] = None) → None[source]¶
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
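A PyMuPDFParser usage sketch; it assumes the pymupdf package is installed, and the file path is a placeholder:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyMuPDFParser
parser = PyMuPDFParser(text_kwargs={"sort": True})  # kwargs are forwarded to fitz.Page.get_text()
blob = Blob.from_path("report.pdf")  # placeholder path
docs = list(parser.lazy_parse(blob))  # one Document per page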
langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Load WhatsApp messages text file.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WhatsAppChatLoader¶
WhatsApp
WhatsApp Chat
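A minimal WhatsAppChatLoader usage sketch; the file name is a placeholder for a chat export produced with WhatsApp's "Export chat" feature:
from langchain.document_loaders.whatsapp_chat import WhatsAppChatLoader
loader = WhatsAppChatLoader("whatsapp_chat.txt")  # placeholder path
docs = loader.load()  # the parsed conversation is returned as Document objects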
langchain.document_loaders.parsers.generic.MimeTypeBasedParser¶
class langchain.document_loaders.parsers.generic.MimeTypeBasedParser(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None)[source]¶
Parser that uses mime-types to parse a blob.
This parser is useful for simple pipelines where the mime-type is sufficient
to determine how to parse a blob.
To use, configure handlers based on mime-types and pass them to the initializer.
Example
from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
parser = MimeTypeBasedParser(
    handlers={"application/pdf": ...},
    fallback_parser=...,
)
Define a parser that uses mime-types to determine how to parse a blob.
Parameters
handlers – A mapping from mime-types to functions that take a blob, parse it
and return a document.
fallback_parser – A fallback parser to use if the mime-type is not
found in the handlers. If provided, this parser will be
used to parse blobs with all mime-types not found in
the handlers.
If not provided, a ValueError will be raised if the
mime-type is not found in the handlers.
Methods
__init__(handlers, *[, fallback_parser])
Define a parser that uses mime-types to determine how to parse a blob.
lazy_parse(blob)
Load documents from a blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None) → None[source]¶
Define a parser that uses mime-types to determine how to parse a blob.
Parameters
handlers – A mapping from mime-types to functions that take a blob, parse it
and return a document.
fallback_parser – A fallback parser to use if the mime-type is not
found in the handlers. If provided, this parser will be
used to parse blobs with all mime-types not found in
the handlers.
If not provided, a ValueError will be raised if the
mime-type is not found in the handlers.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load documents from a blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.generic.MimeTypeBasedParser.html |
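Expanding the elided example above into a runnable sketch; the choice of PyMuPDFParser and TextParser here is illustrative, not prescriptive, and the file path is a placeholder:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
from langchain.document_loaders.parsers.pdf import PyMuPDFParser
from langchain.document_loaders.parsers.txt import TextParser
parser = MimeTypeBasedParser(
    handlers={
        "application/pdf": PyMuPDFParser(),
        "text/plain": TextParser(),
    },
    fallback_parser=TextParser(),  # without a fallback, unknown mime-types raise ValueError
)
blob = Blob.from_path("notes.txt")  # placeholder path; the mime-type is guessed from it
docs = parser.parse(blob)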