id | text | source |
---|---|---|
43ef13d6a5de-0 | langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html |
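For illustration, a minimal call sketch based on the signature above (the exact output format is whatever the implementation produces; the argument values are made up):
from langchain.document_loaders.whatsapp_chat import concatenate_rows
row = concatenate_rows("1/23/23, 3:19 PM", "Alice", "Hello there!")  # returns a single formatted string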
bfdea50d3d49-0 | langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Load notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
Methods
__init__([access_token, port, host])
param access_token
The access token to use.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost') → None[source]¶
Parameters
access_token – The access token to use.
port – The port where the Web Clipper service is running. Default is 41184.
host – The host where the Web Clipper service is running.
Default is localhost.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
bfdea50d3d49-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using JoplinLoader¶
Joplin | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
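A minimal usage sketch based on the parameters above; the token value is a placeholder, and Joplin must be running with the Web Clipper service enabled:
from langchain.document_loaders import JoplinLoader
loader = JoplinLoader(
    access_token="<joplin-web-clipper-token>",  # from the Web Clipper "Advanced Options"
    port=41184,
    host="localhost",
)
docs = loader.load()  # Documents built from your Joplin notes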
95e20f33dfb6-0 | langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)[source]¶
Load PDF using pypdf and chunk at character level.
Methods
__init__([password])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(password: Optional[Union[str, bytes]] = None)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html |
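A usage sketch, assuming the Blob helper from langchain.document_loaders.blob_loaders and a local example.pdf (hypothetical path); pypdf must be installed:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFParser

parser = PyPDFParser()                  # pass password=... for encrypted PDFs
blob = Blob.from_path("example.pdf")    # hypothetical file
for doc in parser.lazy_parse(blob):     # lazily yields one Document per page
    print(doc.metadata)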
920b66bc62b4-0 | langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser¶
class langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser(client: Any, model: str)[source]¶
Loads a PDF with Azure Document Intelligence
(formerly Form Recognizer) and chunks at character level.
Methods
__init__(client, model)
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(client: Any, model: str)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser.html |
d1d065d98f99-0 | langchain.document_loaders.github.GitHubIssuesLoader¶
class langchain.document_loaders.github.GitHubIssuesLoader[source]¶
Bases: BaseGitHubLoader
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param assignee: Optional[str] = None¶
Filter on assigned user. Pass ‘none’ for no user and ‘*’ for any user.
param creator: Optional[str] = None¶
Filter on the user that created the issue.
param direction: Optional[Literal['asc', 'desc']] = None¶
The direction to sort the results by. Can be one of: ‘asc’, ‘desc’.
param include_prs: bool = True¶
If True include Pull Requests in results, otherwise ignore them.
param labels: Optional[List[str]] = None¶
Label names to filter on. Example: bug,ui,@high.
param mentioned: Optional[str] = None¶
Filter on a user that’s mentioned in the issue.
param milestone: Optional[Union[int, Literal['*', 'none']]] = None¶
If integer is passed, it should be a milestone’s number field.
If the string ‘*’ is passed, issues with any milestone are accepted.
If the string ‘none’ is passed, issues without milestones are returned.
param repo: str [Required]¶
Name of repository
param since: Optional[str] = None¶
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
d1d065d98f99-1 | param sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
param state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
d1d065d98f99-2 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load() → List[Document][source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
d1d065d98f99-3 | Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_issue(issue: dict) → Document[source]¶
Create Document objects from a list of GitHub issues.
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property headers: Dict[str, str]¶
property query_params: str¶
Create query parameters for GitHub API.
property url: str¶
Create URL for GitHub API.
Examples using GitHubIssuesLoader¶
GitHub | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
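For illustration, a hedged instantiation sketch using the documented fields; the repository name and token are placeholders:
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="<owner>/<repository>",                    # placeholder
    access_token="<github-personal-access-token>",
    include_prs=False,                              # skip pull requests
    state="open",
    labels=["bug"],
    sort="created",
    direction="desc",
)
docs = loader.load()  # each issue becomes a Document with metadata such as url, title, state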
96ff100ec0de-0 | langchain.document_loaders.rss.RSSFeedLoader¶
class langchain.document_loaders.rss.RSSFeedLoader(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any)[source]¶
Load news articles from RSS feeds using Unstructured.
Parameters
urls – URLs for RSS feeds to load. Each article in a feed is loaded into its own document.
opml – OPML file to load feed urls from. Only one of urls or opml should be provided.
The value can be a URL string, or the OPML markup contents as bytes or a string.
continue_on_failure – If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar – If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, pip install tqdm.
**newsloader_kwargs – Any additional named arguments to pass to
NewsURLLoader.
Example
from langchain.document_loaders import RSSFeedLoader
loader = RSSFeedLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
The loader uses feedparser to parse RSS feeds. The feedparser library is not installed by default so you should
install it if using this loader:
https://pythonhosted.org/feedparser/
If you use OPML, you should also install listparser:
https://pythonhosted.org/listparser/
Finally, newspaper is used to process each article:
https://newspaper.readthedocs.io/en/latest/
Initialize with urls or OPML.
Methods
__init__([urls, opml, continue_on_failure, ...])
Initialize with urls or OPML.
lazy_load() | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
96ff100ec0de-1 | Initialize with urls or OPML.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any) → None[source]¶
Initialize with urls or OPML.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RSSFeedLoader¶
RSS Feeds | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
8b2e788cb6fc-0 | langchain.document_loaders.news.NewsURLLoader¶
class langchain.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶
Load news articles from URLs using Unstructured.
Parameters
urls – URLs to load. Each is loaded into its own document.
text_mode – If True, extract text from URL and use that for page content.
Otherwise, extract raw HTML.
nlp – If True, perform NLP on the extracted contents, like providing a summary
and extracting keywords.
continue_on_failure – If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar – If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, pip install tqdm.
**newspaper_kwargs – Any additional named arguments to pass to
newspaper.Article().
Example
from langchain.document_loaders import NewsURLLoader
loader = NewsURLLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
Newspaper reference: https://newspaper.readthedocs.io/en/latest/
Initialize with file path.
Methods
__init__(urls[, text_mode, nlp, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any) → None[source]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
8b2e788cb6fc-1 | Initialize with file path.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NewsURLLoader¶
News URL | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
fcf318d0f5e4-0 | langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser¶
class langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser(textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None)[source]¶
Send PDF files to Amazon Textract and parse them.
For parsing multi-page PDFs, they have to reside on S3.
Initializes the parser.
Parameters
textract_features – Features to be used for extraction, each feature
should be passed as an int that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client
Methods
__init__([textract_features, client])
Initializes the parser.
lazy_parse(blob)
Iterates over the Blob pages and returns an Iterator with a Document for each page, like the other parsers. For a multi-page document, blob.path has to be set to the S3 URI; for single-page documents, blob.data is used.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None) → None[source]¶
Initializes the parser.
Parameters
textract_features – Features to be used for extraction, each feature
should be passed as an int that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Iterates over the Blob pages and returns an Iterator with a Document
for each page, like the other parsers. For a multi-page document, blob.path
has to be set to the S3 URI; for single-page documents, blob.data is used.
parse(blob: Blob) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
fcf318d0f5e4-1 | parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
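A sketch of wiring the parser to a boto3 Textract client; the file and bucket names are placeholders, and the Blob construction for the S3 case is an assumption based on the docstring above:
import boto3
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import AmazonTextractPDFParser

textract_client = boto3.client("textract", region_name="us-east-2")  # example region
parser = AmazonTextractPDFParser(client=textract_client)

# Single-page document: blob.data is sent to Textract directly.
docs = list(parser.lazy_parse(Blob.from_path("single-page.pdf")))

# Multi-page document: per the docstring, blob.path has to be the S3 URI.
docs = list(parser.lazy_parse(Blob(data=None, path="s3://<bucket>/multi-page.pdf")))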
b140d872b03c-0 | langchain.document_loaders.fauna.FaunaLoader¶
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
Load from FaunaDB.
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the content of each page.
Type
str
secret¶
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields¶
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
Methods
__init__(query, page_content_field, secret)
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FaunaLoader¶
Fauna | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.fauna.FaunaLoader.html |
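An illustrative sketch with placeholder values; the FQL query, field names, and secret are hypothetical:
from langchain.document_loaders.fauna import FaunaLoader

loader = FaunaLoader(
    query="Item.all()",              # hypothetical FQL query
    page_content_field="text",       # field that holds the document body
    secret="<fauna-secret-key>",
    metadata_fields=["title"],       # optional extra fields kept as metadata
)
docs = loader.load()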
6f3743fa6d62-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Transcribe and parse audio files with OpenAI Whisper model.
Audio transcription with OpenAI Whisper model locally from transformers.
Parameters:
device - device to use.
NOTE: by default the GPU is used if available;
if you want to use the CPU, set device = "cpu".
lang_model - whisper model to use, for example "openai/whisper-medium"
forced_decoder_ids - id states for the decoder in a multilingual model,
usage example:
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french",
task="transcribe")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french",
task="translate")
Initialize the parser.
Parameters
device – device to use.
lang_model – whisper model to use, for example “openai/whisper-medium”.
Defaults to None.
forced_decoder_ids – id states for decoder in a multilanguage model.
Defaults to None.
Methods
__init__([device, lang_model, ...])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Initialize the parser.
Parameters
device – device to use. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
6f3743fa6d62-1 | Initialize the parser.
Parameters
device – device to use.
lang_model – whisper model to use, for example “openai/whisper-medium”.
Defaults to None.
forced_decoder_ids – id states for decoder in a multilanguage model.
Defaults to None.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
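A usage sketch; the audio path is hypothetical, and the transformers dependency (plus a suitable torch backend) is assumed to be installed:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal

parser = OpenAIWhisperParserLocal(lang_model="openai/whisper-medium")
docs = parser.parse(Blob.from_path("meeting.mp3"))  # hypothetical audio file
print(docs[0].page_content)                         # transcribed text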
d76fbebe4124-0 | langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser independent of how the blob was originally loaded.
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
abstract lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document][source]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html |
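To show the abstract interface in use, a toy subclass (illustrative only) that implements lazy_parse and inherits the eager parse helper:
from typing import Iterator
from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

class UpperCaseParser(BaseBlobParser):
    """Toy parser: emits one Document with the blob text upper-cased."""

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        yield Document(
            page_content=blob.as_string().upper(),
            metadata={"source": blob.source},
        )

docs = UpperCaseParser().parse(Blob.from_data("hello world", path="inline.txt"))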
31af7c8ba374-0 | langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader¶
class langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Load from TensorFlow Dataset.
dataset_name¶
the name of the dataset to load
split_name¶
the name of the split to load.
load_max_docs¶
a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function¶
a function that converts a dataset sample
into a Document
Example
from langchain.document_loaders import TensorflowDatasetLoader
from langchain.schema import Document
def decode_to_str(item) -> str:
    # Helper assumed by this example: decode a TensorFlow string tensor to str.
    return item.numpy().decode("utf-8")
def mlqaen_example_to_document(example: dict) -> Document:
return Document(
page_content=decode_to_str(example["context"]),
metadata={
"id": decode_to_str(example["id"]),
"title": decode_to_str(example["title"]),
"question": decode_to_str(example["question"]),
"answer": decode_to_str(example["answers"]["text"][0]),
},
)
tsds_client = TensorflowDatasetLoader(
dataset_name="mlqa/en",
split_name="test",
load_max_docs=100,
sample_to_document_function=mlqaen_example_to_document,
)
Initialize the TensorflowDatasetLoader.
Parameters
dataset_name – the name of the dataset to load
split_name – the name of the split to load.
load_max_docs – a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function – a function that converts a dataset sample
into a Document.
Attributes
load_max_docs
The maximum number of documents to load.
sample_to_document_function
Custom function that transform a dataset sample into a Document.
Methods | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html |
31af7c8ba374-1 | sample_to_document_function
Custom function that transform a dataset sample into a Document.
Methods
__init__(dataset_name, split_name[, ...])
Initialize the TensorflowDatasetLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Initialize the TensorflowDatasetLoader.
Parameters
dataset_name – the name of the dataset to load
split_name – the name of the split to load.
load_max_docs – a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function – a function that converts a dataset sample
into a Document.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TensorflowDatasetLoader¶
TensorFlow Datasets | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html |
e6b20484b744-0 | langchain.document_loaders.pdf.PyPDFLoader¶
class langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None, headers: Optional[Dict] = None)[source]¶
Load PDF using pypdf and chunk at character level.
Loader also stores page numbers in metadata.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path[, password, headers])
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, password: Optional[Union[str, bytes]] = None, headers: Optional[Dict] = None) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PyPDFLoader¶
Document Comparison
Google Cloud Storage File
MergeDocLoader
QA using Activeloop’s DeepLake | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFLoader.html |
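A short usage sketch (the file path is hypothetical; pypdf must be installed):
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")   # hypothetical path; pass password=... for encrypted files
pages = loader.load()                 # one Document per page, page number stored in metadata
print(pages[0].metadata)              # e.g. {'source': 'example.pdf', 'page': 0}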
2e29783af51d-0 | langchain.document_loaders.pdf.PyPDFDirectoryLoader¶
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]¶
Load a directory with PDF files using pypdf and chunk at character level.
Loader also stores page numbers in metadata.
Methods
__init__(path[, glob, silent_errors, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFDirectoryLoader.html |
6f6e5deb9c79-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶
Parameters for the embaas document extraction API.
Attributes
mime_type
The mime type of the document.
file_extension
The file extension of the document.
file_name
The file name of the document.
should_chunk
Whether to chunk the document into pages.
chunk_size
The maximum size of the text chunks.
chunk_overlap
The maximum overlap allowed between chunks.
chunk_splitter
The text splitter class name for creating chunks.
separators
The separators for chunks.
should_embed
Whether to create embeddings for the document in the response.
model
The model to pass to the Embaas document extraction API.
instruction
The instruction to pass to the Embaas document extraction API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
6f6e5deb9c79-1 | update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
6f6e5deb9c79-2 | If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
a077c26e1d8b-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]¶
Payload for the Embaas document extraction API.
Attributes
bytes
The base64 encoded bytes of the document to extract text from.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
a077c26e1d8b-1 | items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
88c29c9d346a-0 | langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False, continue_on_failure: bool = False, **kwargs: Any)[source]¶
Load a sitemap and its URLs.
Initialize with webpage path and optional filter URLs.
Parameters
web_path – url of the sitemap. can also be a local path
filter_urls – list of strings or regexes that will be applied to filter the
urls that are parsed and loaded
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded - zero indexed.
Default: 0
meta_function – Function to parse bs4.Soup output for metadata
remember, when setting this function, to also copy metadata["loc"]
to metadata["source"] if you are using this field
is_local – whether the sitemap is a local file. Default: False
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
Attributes
web_path
Methods
__init__(web_path[, filter_urls, ...])
Initialize with webpage path and optional filter URLs.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load() | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
88c29c9d346a-1 | lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False, continue_on_failure: bool = False, **kwargs: Any)[source]¶
Initialize with webpage path and optional filter URLs.
Parameters
web_path – url of the sitemap. can also be a local path
filter_urls – list of strings or regexes that will be applied to filter the
urls that are parsed and loaded
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded - zero indexed.
Default: 0
meta_function – Function to parse bs4.Soup output for metadata
remember, when setting this function, to also copy metadata["loc"]
to metadata["source"] if you are using this field
is_local – whether the sitemap is a local file. Default: False
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
88c29c9d346a-2 | may result in missing data. Default: False
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load sitemap.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_sitemap(soup: Any) → List[dict][source]¶
Parse sitemap xml and load into a list of dicts.
Parameters
soup – BeautifulSoup object.
Returns
List of dicts.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using SitemapLoader¶
Sitemap | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
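An illustrative sketch with a hypothetical sitemap URL; the meta_function callback signature shown here (a metadata dict plus the page content) is an assumption, and the docstring's reminder about copying metadata["loc"] into metadata["source"] is followed explicitly:
from langchain.document_loaders import SitemapLoader

def harvest_meta(meta: dict, _content) -> dict:
    # Copy "loc" into "source" ourselves, as the docstring advises.
    return {"source": meta["loc"], "lastmod": meta.get("lastmod")}

loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",    # hypothetical sitemap
    filter_urls=["https://example.com/blog/.*"],   # regexes applied to each URL
    meta_function=harvest_meta,
    continue_on_failure=True,
)
docs = loader.load()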
33580b722a55-0 | langchain.document_loaders.blockchain.BlockchainDocumentLoader¶
class langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Load elements from a blockchain smart contract.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
Methods
__init__(contract_address[, blockchainType, ...])
param contract_address | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
33580b722a55-1 | __init__(contract_address[, blockchainType, ...])
param contract_address
The address of the smart contract.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BlockchainDocumentLoader¶
Blockchain | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
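A sketch using the documented parameters; the contract address is a placeholder and ALCHEMY_API_KEY is read from the environment as required:
import os
from langchain.document_loaders.blockchain import BlockchainDocumentLoader, BlockchainType

loader = BlockchainDocumentLoader(
    contract_address="0x<nft-contract-address>",   # placeholder
    blockchainType=BlockchainType.ETH_MAINNET,
    api_key=os.environ["ALCHEMY_API_KEY"],
    get_all_tokens=False,                          # page manually via startToken instead
)
docs = loader.load()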
645e4571c06a-0 | langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load text file.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
Initialize with file path.
Methods
__init__(file_path[, encoding, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TextLoader¶
Cohere Reranker
Confident
Elasticsearch
Chat Over Documents with Vectara
Vectorstore
LanceDB
sqlite-vss
Weaviate
DashVector
ScaNN
Xata
Vectara
PGVector
Rockset
Dingo
Zilliz
SingleStoreDB
Annoy
Typesense
Atlas | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
645e4571c06a-1 | Dingo
Zilliz
SingleStoreDB
Annoy
Typesense
Atlas
Activeloop Deep Lake
Neo4j Vector Index
Tair
Chroma
Alibaba Cloud OpenSearch
StarRocks
scikit-learn
Tencent Cloud VectorDB
DocArray HnswSearch
MyScale
ClickHouse
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
BagelDB
Azure Cognitive Search
Cassandra
USearch
Milvus
Marqo
DocArray InMemorySearch
Postgres Embedding
Faiss
Epsilla
AnalyticDB
Hologres
your local model path
MongoDB Atlas
Meilisearch
Conversational Retrieval Agent
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
Graph QA
Caching
MultiVector Retriever
Parent Document Retriever
Combine agents and vector stores
Loading from LangChainHub | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
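A minimal sketch (the file path is hypothetical):
from langchain.document_loaders import TextLoader

loader = TextLoader("notes.txt", autodetect_encoding=True)  # hypothetical file
docs = loader.load()   # a single Document containing the whole file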
347edfc98b2b-0 | langchain.document_loaders.max_compute.MaxComputeLoader¶
class langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Load from Alibaba Cloud MaxCompute table.
Initialize Alibaba Cloud MaxCompute document loader.
Parameters
query – SQL query to execute.
api_wrapper – MaxCompute API wrapper.
page_content_columns – The columns to write into the page_content of the
Document. If unspecified, all columns will be written to page_content.
metadata_columns – The columns to write into the metadata of the Document.
If unspecified, all columns not added to page_content will be written.
Methods
__init__(query, api_wrapper, *[, ...])
Initialize Alibaba Cloud MaxCompute document loader.
from_params(query, endpoint, project, *[, ...])
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Initialize Alibaba Cloud MaxCompute document loader.
Parameters
query – SQL query to execute.
api_wrapper – MaxCompute API wrapper.
page_content_columns – The columns to write into the page_content of the
Document. If unspecified, all columns will be written to page_content.
metadata_columns – The columns to write into the metadata of the Document.
If unspecified, all columns not added to page_content will be written. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html |
347edfc98b2b-1 | If unspecified, all columns not added to page_content will be written.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → MaxComputeLoader[source]¶
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query – SQL query to execute.
endpoint – MaxCompute endpoint.
project – A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id – MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key – MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MaxComputeLoader¶
Alibaba Cloud MaxCompute | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html |
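A sketch of the from_params constructor with placeholder endpoint, project, query, and column names; credentials come from the documented environment variables:
import os
from langchain.document_loaders import MaxComputeLoader

loader = MaxComputeLoader.from_params(
    query="SELECT id, title, body FROM my_table",   # hypothetical table and columns
    endpoint="<maxcompute-endpoint>",
    project="<project-name>",
    access_id=os.environ.get("MAX_COMPUTE_ACCESS_ID"),
    secret_access_key=os.environ.get("MAX_COMPUTE_SECRET_ACCESS_KEY"),
    page_content_columns=["body"],
    metadata_columns=["id", "title"],
)
docs = loader.load()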
d5f318ee4f54-0 | langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader¶
class langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Load Polars DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Polars DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
Methods
__init__(data_frame, *[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Parameters
data_frame – Polars DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PolarsDataFrameLoader¶
Polars DataFrame | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader.html |
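An illustrative sketch with a tiny in-memory Polars DataFrame (column names are made up); columns other than page_content_column are expected to land in metadata:
import polars as pl
from langchain.document_loaders import PolarsDataFrameLoader

df = pl.DataFrame({
    "text": ["first document", "second document"],
    "topic": ["intro", "details"],
})
loader = PolarsDataFrameLoader(df, page_content_column="text")
docs = loader.load()   # "topic" is expected to appear in each Document's metadata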
4c3bb0959f6e-0 | langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Load ReadTheDocs documentation directory.
Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", {"class": "main"}). The loader iterates html tags in the order of the
custom html tags (if any) and then the default html tags. If any of these
tags is non-empty, the loop stops and the content is retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
Methods
__init__(path[, encoding, errors, ...])
Initialize ReadTheDocsLoader
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Initialize ReadTheDocsLoader | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
4c3bb0959f6e-1 | Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", {"class": "main"}). The loader iterates html tags in the order of the
custom html tags (if any) and then the default html tags. If any of these
tags is non-empty, the loop stops and the content is retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specify how encoding and decoding errors are to be handled—this
cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ReadTheDocsLoader¶
ReadTheDocs Documentation | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
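A sketch assuming the documentation was mirrored locally first (e.g. with wget); the directory name is a placeholder, and the custom_html_tag value follows the Tuple[str, dict] annotation above:
from langchain.document_loaders import ReadTheDocsLoader

# e.g. mirrored beforehand with:
#   wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/
loader = ReadTheDocsLoader(
    "rtdocs",                                    # placeholder directory of pulled HTML
    encoding="utf-8",
    custom_html_tag=("div", {"class": "main"}),  # optional (tag, attrs) override
)
docs = loader.load()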
124f70a1a3fe-0 | langchain.document_loaders.parsers.language.language_parser.LanguageParser¶
class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Parse using the respective programming language syntax.
Each top-level function and class in the code is loaded into separate documents.
Furthermore, an extra document is generated, containing the remaining top-level code
that excludes the already segmented functions and classes.
This approach can potentially improve the accuracy of QA models over source code.
Currently, the supported languages for code parsing are Python and JavaScript.
The language used for parsing can be configured, along with the minimum number of
lines required to activate the splitting based on syntax.
Examples
from langchain.text_splitter import Language
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py", ".js"],
parser=LanguageParser()
)
docs = loader.load()
Example instantiations to manually select the language:
.. code-block:: python
from langchain.text_splitter import Language
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(language=Language.PYTHON)
)
Example instantiations to set number of lines threshold:
.. code-block:: python
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(parser_threshold=200)
)
Language parser that splits code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
124f70a1a3fe-1 | Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
Methods
__init__([language, parser_threshold])
Language parser that splits code using the respective language syntax.
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Language parser that splits code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
Examples using LanguageParser¶
Source Code
Set env var OPENAI_API_KEY or load from a .env file | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
8862ef7f7259-0 | langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Load HTML using 2markdown API.
Initialize with url and api key.
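Example
A minimal usage sketch; the page URL and API key below are placeholders, not values taken from this reference.
from langchain.document_loaders import ToMarkdownLoader

# Placeholder URL and key; the 2markdown API converts the fetched page into Markdown text.
loader = ToMarkdownLoader(url="https://example.com/article", api_key="MY_2MARKDOWN_API_KEY")
docs = loader.load()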
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, api_key: str)[source]¶
Initialize with url and api key.
lazy_load() → Iterator[Document][source]¶
Lazily load the file.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ToMarkdownLoader¶
2Markdown | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html |
5826b16b763b-0 | langchain.document_loaders.onedrive_file.OneDriveFileLoader¶
class langchain.document_loaders.onedrive_file.OneDriveFileLoader[source]¶
Bases: BaseLoader, BaseModel
Load a file from Microsoft OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param file: File [Required]¶
The file to load.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
5826b16b763b-1 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load Documents
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
5826b16b763b-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
93e158d1b2dc-0 | langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Load from Gutenberg.org.
Initialize with a file path.
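Example
A minimal usage sketch; the ebook URL below is illustrative, and the loader expects a plain-text (.txt) link from gutenberg.org.
from langchain.document_loaders import GutenbergLoader

# Placeholder ebook URL; load() returns the book text as Documents.
loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
docs = loader.load()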
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GutenbergLoader¶
Gutenberg | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html |
2053eda6b566-0 | langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured API.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredAPIFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredAPIFileIOLoader(
        f, mode="elements", strategy="fast", api_key="MY_API_KEY",
    )
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with a file object.
Methods
__init__(file[, mode, url, api_key])
Initialize with a file object.
lazy_load()
A lazy loader for Documents.
load() | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html |
2053eda6b566-1 | Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Initialize with a file object.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html |
19361ea7ed1b-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]¶
Transcribe and parse audio files.
Audio transcription is with OpenAI Whisper model.
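Example
A minimal usage sketch; the audio file path and API key are placeholders, and the parser can also be combined with a GenericLoader as shown elsewhere in these docs.
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser

# Placeholder path and key; the blob's audio is sent to the Whisper API and returned as Documents.
parser = OpenAIWhisperParser(api_key="MY_OPENAI_API_KEY")
docs = parser.parse(Blob.from_path("sample_audio.mp3"))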
Methods
__init__([api_key])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(api_key: Optional[str] = None)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
Examples using OpenAIWhisperParser¶
Loading documents from a YouTube url | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html |
9d1fc3ae284c-0 | langchain.document_loaders.parsers.docai.DocAIParsingResults¶
class langchain.document_loaders.parsers.docai.DocAIParsingResults(source_path: str, parsed_path: str)[source]¶
A dataclass to store DocAI parsing results.
Attributes
source_path
parsed_path
Methods
__init__(source_path, parsed_path)
__init__(source_path: str, parsed_path: str) → None¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.docai.DocAIParsingResults.html |
5c07866f186e-0 | langchain.document_loaders.apify_dataset.ApifyDatasetLoader¶
class langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]¶
Bases: BaseLoader, BaseModel
Load datasets from Apify web scraping, crawling, and data extraction platform.
For details, see https://docs.apify.com/platform/integrations/langchain
Example
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document
loader = ApifyDatasetLoader(
dataset_id="YOUR-DATASET-ID",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
documents = loader.load()
Initialize the loader with an Apify dataset ID and a mapping function.
Parameters
dataset_id (str) – The ID of the dataset on the Apify platform.
dataset_mapping_function (Callable) – A function that takes a single
dictionary (an Apify dataset item) and converts it to an instance
of the Document class.
param apify_client: Any = None¶
An instance of the ApifyClient class from the apify-client Python package.
param dataset_id: str [Required]¶
The ID of the dataset on the Apify platform.
param dataset_mapping_function: Callable[[Dict], langchain.schema.document.Document] [Required]¶
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
5c07866f186e-1 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
5c07866f186e-2 | classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
5c07866f186e-3 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using ApifyDatasetLoader¶
Apify
Apify Dataset | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
e6b6ff3cd687-0 | langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Load from Wikipedia.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
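Example
A minimal usage sketch; the query is illustrative and load_max_docs is kept small for brevity.
from langchain.document_loaders import WikipediaLoader

# Each matching Wikipedia page becomes one Document.
loader = WikipediaLoader(query="Large language model", load_max_docs=2)
docs = loader.load()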
Methods
__init__(query[, lang, load_max_docs, ...])
Initializes a new instance of the WikipediaLoader class.
lazy_load()
A lazy loader for Documents.
load()
Loads the query result from Wikipedia into a list of Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
e6b6ff3cd687-1 | Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WikipediaLoader¶
Wikipedia
Diffbot Graph Transformer | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
ee50e72e062d-0 | langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]¶
Parse PDF with PyPDFium2.
Initialize the parser.
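Example
A minimal usage sketch; "example.pdf" is a placeholder path.
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFium2Parser

# Each page of the PDF is parsed into its own Document.
parser = PyPDFium2Parser()
docs = parser.parse(Blob.from_path("example.pdf"))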
Methods
__init__()
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__() → None[source]¶
Initialize the parser.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html |
2431e7b167fb-0 | langchain.document_loaders.word_document.Docx2txtLoader¶
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)[source]¶
Load DOCX file using docx2txt and chunks at character level.
Checks for a local file by default, but if the file is a web path, it will download it
to a temporary file, use that, and then clean up the temporary file after completion.
Initialize with file path.
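Example
A minimal usage sketch; "example.docx" is a placeholder path, and a web path would be downloaded to a temporary file first.
from langchain.document_loaders import Docx2txtLoader

# The whole document is returned as a single Document.
loader = Docx2txtLoader("example.docx")
docs = loader.load()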
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load given path as single page.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load given path as single page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using Docx2txtLoader¶
Microsoft Word | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.Docx2txtLoader.html |
9dd8768cade6-0 | langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Parse HTML files using Beautiful Soup.
Initialize a bs4 based HTML parser.
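Example
A minimal usage sketch; "example.html" is a placeholder path.
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.html.bs4 import BS4HTMLParser

# Beautiful Soup extracts the page text; the page title is kept in the Document metadata.
parser = BS4HTMLParser()
docs = parser.parse(Blob.from_path("example.html"))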
Methods
__init__(*[, features, get_text_separator])
Initialize a bs4 based HTML parser.
lazy_parse(blob)
Load HTML document into document objects.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any) → None[source]¶
Initialize a bs4 based HTML parser.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load HTML document into document objects.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html |
cb1ec02e5c38-0 | langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Load the Mastodon ‘toots’.
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100.
exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
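Example
A minimal usage sketch; the account handle is illustrative, and no access token is needed for public toots.
from langchain.document_loaders import MastodonTootsLoader

# Placeholder account; each toot becomes one Document.
loader = MastodonTootsLoader(
    mastodon_accounts=["@Gargron@mastodon.social"],
    number_toots=10,
)
docs = loader.load()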
Methods
__init__(mastodon_accounts[, number_toots, ...])
Instantiate Mastodon toots loader.
lazy_load()
A lazy loader for Documents.
load()
Load toots into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
cb1ec02e5c38-1 | exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load toots into documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MastodonTootsLoader¶
Mastodon | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
de6183041981-0 | langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Merge documents from a list of loaders
Initialize with a list of loaders
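Example
A minimal usage sketch; the child loaders and their sources are placeholders, and any BaseLoader instances can be combined.
from langchain.document_loaders import MergedDataLoader, TextLoader, WebBaseLoader

# Documents from every child loader are returned in a single list.
loader_all = MergedDataLoader(
    loaders=[TextLoader("notes.txt"), WebBaseLoader("https://example.com")]
)
docs = loader_all.load()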
Methods
__init__(loaders)
Initialize with a list of loaders
lazy_load()
Lazy load docs from each individual loader.
load()
Load docs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(loaders: List)[source]¶
Initialize with a list of loaders
lazy_load() → Iterator[Document][source]¶
Lazy load docs from each individual loader.
load() → List[Document][source]¶
Load docs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MergedDataLoader¶
MergeDocLoader | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html |
e9e78363c6ce-0 | langchain.document_loaders.airbyte.AirbyteHubspotLoader¶
class langchain.document_loaders.airbyte.AirbyteHubspotLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Hubspot using an Airbyte source connector.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
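Example
A minimal usage sketch; the config values are placeholders, and the exact config shape follows the Airbyte Hubspot source connector spec.
from langchain.document_loaders.airbyte import AirbyteHubspotLoader

# Placeholder credentials; stream_name selects which Hubspot stream (e.g. products) to read.
config = {
    "start_date": "2023-01-01T00:00:00Z",
    "credentials": {
        "credentials_title": "Private App Credentials",
        "access_token": "MY_HUBSPOT_ACCESS_TOKEN",
    },
}
loader = AirbyteHubspotLoader(config=config, stream_name="products")
docs = loader.load()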
Attributes
last_state
Methods
__init__(config, stream_name[, ...])
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteHubspotLoader.html |
e9e78363c6ce-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteHubspotLoader¶
Airbyte Hubspot | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteHubspotLoader.html |
8e452074ec00-0 | langchain.document_loaders.pdf.PDFPlumberLoader¶
class langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, headers: Optional[Dict] = None)[source]¶
Load PDF files using pdfplumber.
Initialize with a file path.
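Example
A minimal usage sketch; "example.pdf" is a placeholder path.
from langchain.document_loaders import PDFPlumberLoader

# dedupe=True removes the duplicated characters some PDFs produce during extraction.
loader = PDFPlumberLoader("example.pdf", dedupe=True)
docs = loader.load()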
Attributes
source
Methods
__init__(file_path[, text_kwargs, dedupe, ...])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, headers: Optional[Dict] = None) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFPlumberLoader.html |
567e3ae8f5c1-0 | langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RTF files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRTFLoader
loader = UnstructuredRTFLoader("example.rtf", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rtf
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
567e3ae8f5c1-1 | Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
14bece9fb24b-0 | langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load HTML files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredHTMLLoader
loader = UnstructuredHTMLLoader("example.html", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-html
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html |
ebdb5da3acd3-0 | langchain.document_loaders.airbyte.AirbyteCDKLoader¶
class langchain.document_loaders.airbyte.AirbyteCDKLoader(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load with an Airbyte source connector implemented using the CDK.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
source_class – The source connector class.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
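Example
A minimal usage sketch; the GitHub source connector and config values below are assumptions for illustration, and any Airbyte CDK source class can be passed in the same way.
from langchain.document_loaders.airbyte import AirbyteCDKLoader
from source_github.source import SourceGithub  # assumes the airbyte-source-github package is installed

# Placeholder repository and token; stream_name selects the connector stream to read.
config = {
    "repository": "langchain-ai/langchain",
    "credentials": {"personal_access_token": "MY_GITHUB_TOKEN"},
}
loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name="issues")
docs = loader.load()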
Attributes
last_state
Methods
__init__(config, source_class, stream_name)
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
source_class – The source connector class.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html |
ebdb5da3acd3-1 | state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteCDKLoader¶
Airbyte CDK | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html |
40b338d0d7e7-0 | langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Load files from remote URLs using Unstructured.
Use the unstructured partition function to detect the MIME type
and route the file to the appropriate partitioner.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredURLLoader
loader = UnstructuredURLLoader(urls=["<url-1>", "<url-2>"], mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with the URLs to load.
Methods
__init__(urls[, continue_on_failure, mode, ...])
Initialize with the URLs to load.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Initialize with the URLs to load.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
40b338d0d7e7-1 | load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredURLLoader¶
URL | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
d4ef4f01818c-0 | langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raise an error if the installed Unstructured version does not meet the
specified minimum.
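Example
A minimal usage sketch; the version string is illustrative.
from langchain.document_loaders.unstructured import validate_unstructured_version

# Raises an error if the installed unstructured package is older than the stated version.
validate_unstructured_version("0.10.0")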
fe2875ce1290-0 | langchain.document_loaders.mediawikidump.MWDumpLoader¶
class langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
Load MediaWiki dump from an XML file.
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) – XML local file path
encoding (str, optional) – Charset encoding, defaults to “utf8”
namespaces (List[int],optional) – The namespace of pages you want to parse.
See https://www.mediawiki.org/wiki/Help:Namespaces#Localisation
for a list of all common namespaces
skip_redirects (bool, optional) – True to skip pages that redirect to other pages,
False to keep them. False by default
stop_on_error (bool, optional) – False to skip over pages that cause parsing errors,
True to stop. True by default
Methods
__init__(file_path[, encoding, namespaces, ...])
lazy_load()
A lazy loader for Documents.
load()
Load from a file path.
load_and_split([text_splitter])
Load Documents and split into chunks. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
fe2875ce1290-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MWDumpLoader¶
MediaWikiDump | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
63413646c3b6-0 | langchain.document_loaders.browserless.BrowserlessLoader¶
class langchain.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Load webpages with Browserless /content endpoint.
Initialize with API token and the URLs to scrape
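Example
A minimal usage sketch; the API token and URL are placeholders.
from langchain.document_loaders import BrowserlessLoader

# text_content=True returns the rendered page text rather than raw HTML.
loader = BrowserlessLoader(
    api_token="MY_BROWSERLESS_API_TOKEN",
    urls=["https://example.com"],
    text_content=True,
)
docs = loader.load()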
Attributes
api_token
Browserless API token.
urls
List of URLs to scrape.
Methods
__init__(api_token, urls[, text_content])
Initialize with API token and the URLs to scrape
lazy_load()
Lazy load Documents from URLs.
load()
Load Documents from URLs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Initialize with API token and the URLs to scrape
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from URLs.
load() → List[Document][source]¶
Load Documents from URLs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BrowserlessLoader¶
Browserless | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.browserless.BrowserlessLoader.html |
a3f6404aedaa-0 | langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Load from Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
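Example
A minimal usage sketch; the access token is a placeholder and "gateways_options" is one example of a Spreedly list resource.
from langchain.document_loaders import SpreedlyLoader

# Each record returned by the Spreedly API becomes one Document.
loader = SpreedlyLoader(access_token="MY_SPREEDLY_ACCESS_TOKEN", resource="gateways_options")
docs = loader.load()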
Methods
__init__(access_token, resource)
Initialize with an access token and a resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: str, resource: str) → None[source]¶
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SpreedlyLoader¶
Spreedly | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html |
3a78c6bbfa29-0 | langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RST files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader("example.rst", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rst
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
3a78c6bbfa29-1 | Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredRSTLoader¶
RST | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
63e515587574-0 | langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft PowerPoint files using Unstructured.
Works with both .ppt and .pptx files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredPowerPointLoader
loader = UnstructuredPowerPointLoader("example.pptx", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-pptx
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
63e515587574-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredPowerPointLoader¶
Microsoft PowerPoint | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
b2a0e0e1cc7d-0 | langchain.document_loaders.parsers.language.python.PythonSegmenter¶
class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶
Code segmenter for Python.
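Example
A minimal usage sketch; the snippet being segmented is illustrative.
from langchain.document_loaders.parsers.language.python import PythonSegmenter

code = 'def greet(name):\n    return "Hello, " + name\n'
segmenter = PythonSegmenter(code)
if segmenter.is_valid():
    functions_and_classes = segmenter.extract_functions_classes()  # source of each top-level def/class
    remainder = segmenter.simplify_code()  # top-level code with segmented definitions replaced by comments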
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html |
d7e11dbffa39-0 | langchain.document_loaders.telegram.text_to_docs¶
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document][source]¶
Convert a string or list of strings to a list of Documents with metadata. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.text_to_docs.html |
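Example
A minimal usage sketch; the input strings are illustrative.
from langchain.document_loaders.telegram import text_to_docs

# Long inputs are split into chunks; each chunk becomes a Document with positional metadata.
docs = text_to_docs(["First message text", "Second message text"])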
b9c9a598834e-0 | langchain.document_loaders.telegram.TelegramChatApiLoader¶
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Load Telegram chat history via the Telegram API and save it as a JSON file.
Initialize with API parameters.
Parameters
chat_entity – The chat entity to fetch data from.
api_id – The API ID.
api_hash – The API hash.
username – The username.
file_path – The file path to save the data to. Defaults to
“telegram_data.json”.
Methods
__init__([chat_entity, api_id, api_hash, ...])
Initialize with API parameters.
fetch_data_from_telegram()
Fetch data from Telegram API and save it as a JSON file.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Initialize with API parameters.
Parameters
chat_entity – The chat entity to fetch data from.
api_id – The API ID.
api_hash – The API hash.
username – The username.
file_path – The file path to save the data to. Defaults to
“telegram_data.json”.
async fetch_data_from_telegram() → None[source]¶
Fetch data from Telegram API and save it as a JSON file.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
b9c9a598834e-1 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TelegramChatApiLoader¶
Telegram | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
bbea7b828556-0 | langchain.document_loaders.gcs_file.GCSFileLoader¶
class langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load from GCS file.
Initialize with project name, bucket and blob name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…,
>> loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
Methods
__init__(project_name, bucket, blob[, ...])
Initialize with project name, bucket and blob name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Initialize with project name, bucket and blob name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html |
bbea7b828556-1 | file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…,
>> loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSFileLoader¶
Google Cloud Storage
Google Cloud Storage File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html |
fd8504bdfd58-0 | langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Base Loader class for PDF files.
If the file is a web path, it will download it to a temporary file, use it, then clean up the temporary file after completion.
Initialize with a file path.
Parameters
file_path – Either a local, S3 or web path to a PDF file.
headers – Headers to use for GET request to download a file from a web path.
Attributes
source
Methods
__init__(file_path, *[, headers])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Initialize with a file path.
Parameters
file_path – Either a local, S3 or web path to a PDF file.
headers – Headers to use for GET request to download a file from a web path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html |
635d02e3d80a-0 | langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader¶
class langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]] = None, exclude_dirs: Optional[Sequence[str]] = (), timeout: Optional[int] = 10, prevent_outside: Optional[bool] = True, link_regex: Optional[Union[str, Pattern]] = None, headers: Optional[dict] = None, check_response_status: bool = False)[source]¶
Load all child links from a URL page.
Initialize with URL to crawl and any subdirectories to exclude.
:param url: The URL to crawl.
:param max_depth: The max depth of the recursive loading.
:param use_async: Whether to use asynchronous loading.
If True, loading will not be lazy, but it will still behave in the expected way.
Parameters
extractor – A function to extract document contents from raw html. When the
extractor returns an empty string, the document is ignored.
metadata_extractor – A function to extract metadata from raw html and the
source url (args in that order). Default extractor will attempt
to use BeautifulSoup4 to extract the title, description and language
of the page.
exclude_dirs – A list of subdirectories to exclude.
timeout – The timeout for the requests, in seconds. If None, the connection
will not time out.
prevent_outside – If True, prevent loading from urls which are not children
of the root url.
link_regex – Regex for extracting sub-links from the raw html of a web page.
check_response_status – If True, check HTTP response status and skip | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
635d02e3d80a-1 | check_response_status – If True, check HTTP response status and skip
URLs with error responses (400-599).
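Putting the parameters above together, a hedged usage sketch might look like the following (the root URL, depth and excluded subtree are placeholders, and beautifulsoup4 is assumed to be installed for the extractor):
from bs4 import BeautifulSoup
from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",                    # placeholder root URL
    max_depth=2,
    extractor=lambda html: BeautifulSoup(html, "html.parser").get_text(),
    exclude_dirs=["https://docs.python.org/3.9/faq"],      # placeholder subtree to skip
    timeout=10,
    check_response_status=True,                            # skip 4xx/5xx responses
)
docs = loader.load()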
Methods
__init__(url[, max_depth, use_async, ...])
Initialize with URL to crawl and any subdirectories to exclude.
lazy_load()
Lazy load web pages.
load()
Load web pages.
load_and_split([text_splitter])
Load Documents and split into chunks. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
635d02e3d80a-2 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]] = None, exclude_dirs: Optional[Sequence[str]] = (), timeout: Optional[int] = 10, prevent_outside: Optional[bool] = True, link_regex: Optional[Union[str, Pattern]] = None, headers: Optional[dict] = None, check_response_status: bool = False) → None[source]¶
Initialize with URL to crawl and any subdirectories to exclude.
:param url: The URL to crawl.
:param max_depth: The max depth of the recursive loading.
:param use_async: Whether to use asynchronous loading.
If True, loading will not be lazy, but it will still behave in the expected way.
Parameters
extractor – A function to extract document contents from raw html. When the
extractor returns an empty string, the document is ignored.
metadata_extractor – A function to extract metadata from raw html and the
source url (args in that order). Default extractor will attempt
to use BeautifulSoup4 to extract the title, description and language
of the page.
exclude_dirs – A list of subdirectories to exclude.
timeout – The timeout for the requests, in seconds. If None, the connection
will not time out.
prevent_outside – If True, prevent loading from urls which are not children
of the root url.
link_regex – Regex for extracting sub-links from the raw html of a web page.
check_response_status – If True, check HTTP response status and skip
URLs with error responses (400-599).
lazy_load() → Iterator[Document][source]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
635d02e3d80a-3 | lazy_load() → Iterator[Document][source]¶
Lazy load web pages.
When use_async is True, this function is not lazy, but it still returns
documents in the expected way.
load() → List[Document][source]¶
Load web pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RecursiveUrlLoader¶
Recursive URL Loader | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
2326dac25046-0 | langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load from GCS directory.
Initialize with bucket and key name.
Parameters
project_name – The name of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
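A short, hedged usage sketch (the project, bucket and prefix are placeholders, and Google Cloud credentials are assumed to be configured):
from langchain.document_loaders import GCSDirectoryLoader, PyPDFLoader

loader = GCSDirectoryLoader(
    project_name="my-gcp-project",
    bucket="my-bucket",
    prefix="invoices/2023/",        # only objects under this prefix are loaded
    loader_func=PyPDFLoader,        # applied to each downloaded file
)
docs = loader.load()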
Methods
__init__(project_name, bucket[, prefix, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Initialize with bucket and key name.
Parameters
project_name – The name of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
2326dac25046-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSDirectoryLoader¶
Google Cloud Storage
Google Cloud Storage Directory | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
e3e1a3d94104-0 | langchain.document_loaders.rocksetdb.default_joiner¶
langchain.document_loaders.rocksetdb.default_joiner(docs: List[Tuple[str, Any]]) → str[source]¶
Default joiner for content columns. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.default_joiner.html |
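Judging from the RocksetLoader documentation below, default_joiner presumably joins the column values (dropping the column names) with newlines; a hypothetical call could look like this:
from langchain.document_loaders.rocksetdb import default_joiner

joined = default_joiner([("title", "Q3 report"), ("body", "Revenue grew.")])
# expected to produce something like "Q3 report\nRevenue grew."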
99f58e26a813-0 | langchain.document_loaders.rocksetdb.RocksetLoader¶
class langchain.document_loaders.rocksetdb.RocksetLoader(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typing.Any]]], str] = <function default_joiner>)[source]¶
Load from a Rockset database.
To use, you should have the rockset python package installed.
Example
# This code will load 3 records from the "langchain_demo"
# collection as Documents, with the `text` column used as
# the content
from langchain.document_loaders import RocksetLoader
from rockset import RocksetClient, Regions, models
loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(
        query="select * from langchain_demo limit 3"
    ),
    ["text"],
)
Initialize with Rockset client.
Parameters
client – Rockset client object.
query – Rockset query object.
content_keys – The collection columns to be written into the page_content
of the Documents.
metadata_keys – The collection columns to be written into the metadata of
the Documents. By default, this is all the keys in the document.
content_columns_joiner – Method that joins content_keys and their values into a
string. It's a method that takes in a List[Tuple[str, Any]],
representing a list of tuples of (column name, column value).
By default, this is a method that joins each column value with a new
line. This method is only relevant if there are multiple content_keys.
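As an example of the last parameter, here is a hedged sketch of a custom joiner that keeps the column names; the collection and column names are hypothetical:
from langchain.document_loaders import RocksetLoader
from rockset import RocksetClient, Regions, models

def labelled_joiner(columns):
    # columns is a list of (column name, column value) tuples
    return "\n".join(f"{name}: {value}" for name, value in columns)

loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="select title, body from my_collection limit 3"),
    content_keys=["title", "body"],
    content_columns_joiner=labelled_joiner,
)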
Methods | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
99f58e26a813-1 | line. This method is only relevant if there are multiple content_keys.
Methods
__init__(client, query, content_keys[, ...])
Initialize with Rockset client.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typing.Any]]], str] = <function default_joiner>)[source]¶
Initialize with Rockset client.
Parameters
client – Rockset client object.
query – Rockset query object.
content_keys – The collection columns to be written into the page_content
of the Documents.
metadata_keys – The collection columns to be written into the metadata of
the Documents. By default, this is all the keys in the document.
content_columns_joiner – Method that joins content_keys and their values into a
string. It's a method that takes in a List[Tuple[str, Any]],
representing a list of tuples of (column name, column value).
By default, this is a method that joins each column value with a new
line. This method is only relevant if there are multiple content_keys.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
99f58e26a813-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RocksetLoader¶
Rockset | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
47ae88142a5c-0 | langchain.document_loaders.embaas.BaseEmbaasLoader¶
class langchain.document_loaders.embaas.BaseEmbaasLoader[source]¶
Bases: BaseModel
Base loader for Embaas document extraction API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the Embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
The API key for the Embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the Embaas document extraction API.
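These fields are normally supplied on a concrete subclass rather than on the base class itself. A heavily hedged sketch, assuming a concrete loader such as EmbaasLoader exists in the same module and accepts a file_path (the file name, API key and params entry are placeholders):
from langchain.document_loaders.embaas import EmbaasLoader  # assumed concrete subclass

loader = EmbaasLoader(
    embaas_api_key="<api key>",                  # placeholder key
    file_path="example.pdf",                     # hypothetical local file
    params={"mime_type": "application/pdf"},     # assumed extraction parameter
)
docs = loader.load()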
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |