{ "cells": [ { "cell_type": "markdown", "id": "a39a30cb-7280-4cb5-9c08-ab4ed1a7b2b4", "metadata": { "id": "a39a30cb-7280-4cb5-9c08-ab4ed1a7b2b4" }, "source": [ "# LLM handbook\n", "\n", "Following guidance from Pinecone's Langchain handbook." ] }, { "cell_type": "code", "execution_count": 2, "id": "1qUakls_hN6R", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "1qUakls_hN6R", "outputId": "c9988f04-0c1e-41fb-d239-638562d6f754" }, "outputs": [], "source": [ "# # if using Google Colab\n", "# !pip install langchain\n", "# !pip install huggingface_hub\n", "# !pip install python-dotenv\n", "# !pip install pypdf2\n", "# !pip install faiss-cpu\n", "# !pip install sentence_transformers\n", "# !pip install InstructorEmbedding" ] }, { "cell_type": "code", "execution_count": 3, "id": "9fcd2583-d0ab-4649-a241-4526f6a3b83d", "metadata": { "id": "9fcd2583-d0ab-4649-a241-4526f6a3b83d" }, "outputs": [], "source": [ "# import packages\n", "import os\n", "import langchain\n", "import getpass\n", "from langchain import HuggingFaceHub, LLMChain\n", "from dotenv import load_dotenv" ] }, { "cell_type": "markdown", "id": "AyRxKsE4qPR1", "metadata": { "id": "AyRxKsE4qPR1" }, "source": [ "#API KEY" ] }, { "cell_type": "code", "execution_count": 4, "id": "cf146257-5014-4041-980c-0ead2c3932c3", "metadata": { "id": "cf146257-5014-4041-980c-0ead2c3932c3" }, "outputs": [], "source": [ "# LOCAL\n", "load_dotenv()\n", "os.environ.get('HUGGINGFACEHUB_API_TOKEN');" ] }, { "cell_type": "markdown", "id": "yeGkB8OohG93", "metadata": { "id": "yeGkB8OohG93" }, "source": [ "# Skill 1 - using prompt templates\n", "\n", "A prompt is the input to the LLM. Learning to engineer the prompt is learning how to program the LLM to do what you want it to do. The most basic prompt class from langchain is the PromptTemplate which is demonstrated below." ] }, { "cell_type": "code", "execution_count": 5, "id": "06c54d35-e9a2-4043-b3c3-588ac4f4a0d1", "metadata": { "id": "06c54d35-e9a2-4043-b3c3-588ac4f4a0d1" }, "outputs": [], "source": [ "from langchain import PromptTemplate\n", "\n", "# create template\n", "template = \"\"\"\n", "Answer the following question: {question}\n", "\n", "Answer:\n", "\"\"\"\n", "\n", "# create prompt using template\n", "prompt = PromptTemplate(\n", " template=template,\n", " input_variables=['question']\n", ")" ] }, { "cell_type": "markdown", "id": "A1rhV_L1hG94", "metadata": { "id": "A1rhV_L1hG94" }, "source": [ "The next step is to instantiate the LLM. The LLM is fetched from HuggingFaceHub, where we can specify which model we want to use and set its parameters with this as reference . We then set up the prompt+LLM chain using langchain's LLMChain class." ] }, { "cell_type": "code", "execution_count": 6, "id": "03290cad-f6be-4002-b177-00220f22333a", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "03290cad-f6be-4002-b177-00220f22333a", "outputId": "f5dde425-cf9d-416b-a030-3c5d065bafcb" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/danielsuarez-mash/anaconda3/envs/llm/lib/python3.11/site-packages/huggingface_hub/utils/_deprecation.py:127: FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '0.19.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. 
Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client.\n", " warnings.warn(warning_message, FutureWarning)\n" ] } ], "source": [ "# instantiate llm\n", "llm = HuggingFaceHub(\n", " repo_id='tiiuae/falcon-7b-instruct',\n", " model_kwargs={\n", " 'temperature':1,\n", " 'penalty_alpha':2,\n", " 'top_k':50,\n", " 'max_length': 1000\n", " }\n", ")\n", "\n", "# instantiate chain\n", "llm_chain = LLMChain(\n", " llm=llm,\n", " prompt=prompt,\n", " verbose=True\n", ")" ] }, { "cell_type": "markdown", "id": "SeVzuXAxhG96", "metadata": { "id": "SeVzuXAxhG96" }, "source": [ "Now all that's left to do is ask a question and run the chain." ] }, { "cell_type": "code", "execution_count": 7, "id": "92bcc47b-da8a-4641-ae1d-3beb3f870a4f", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "92bcc47b-da8a-4641-ae1d-3beb3f870a4f", "outputId": "2cb57096-85a4-4c3b-d333-2c20ba4f8166" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3m\n", "Answer the following question: How many champions league titles has Real Madrid won?\n", "\n", "Answer:\n", "\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n", "Real Madrid has won 14 La Liga titles, 19 Copa del Rey titles, and 14 Supercopa de España titles. To add to this, they have also won 11 UEFA Champions League titles, making them the most successful club in the UEFA Champions League history.\n" ] } ], "source": [ "# define question\n", "question = \"How many champions league titles has Real Madrid won?\"\n", "\n", "# run question\n", "print(llm_chain.run(question))" ] }, { "cell_type": "markdown", "id": "OOXGnVnRhG96", "metadata": { "id": "OOXGnVnRhG96" }, "source": [ "# Skill 2 - using chains\n", "\n", "Chains are at the core of langchain. They represent a sequence of actions. Above, we used a simple prompt + LLM chain. Let's try some more complex chains." ] }, { "cell_type": "markdown", "id": "kc59-q-NhG97", "metadata": { "id": "kc59-q-NhG97" }, "source": [ "## Math chain" ] }, { "cell_type": "code", "execution_count": 8, "id": "ClxH-ST-hG97", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ClxH-ST-hG97", "outputId": "f950d00b-6e7e-4b49-ef74-ad8963c76a6e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n", "Calculate 5-3?\u001b[32;1m\u001b[1;3m```text\n", "-3 -\n", "```\n", "...numexpr.evaluate(\"-3 -\")...\n", "\u001b[0m" ] }, { "ename": "ValueError", "evalue": "LLMMathChain._evaluate(\"\n-3 -\n\") raised error: invalid syntax (, line 1). 
Please try again with a valid numerical expression", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mSyntaxError\u001b[0m Traceback (most recent call last)", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm_math/base.py:88\u001b[0m, in \u001b[0;36mLLMMathChain._evaluate_expression\u001b[0;34m(self, expression)\u001b[0m\n\u001b[1;32m 86\u001b[0m local_dict \u001b[38;5;241m=\u001b[39m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpi\u001b[39m\u001b[38;5;124m\"\u001b[39m: math\u001b[38;5;241m.\u001b[39mpi, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124me\u001b[39m\u001b[38;5;124m\"\u001b[39m: math\u001b[38;5;241m.\u001b[39me}\n\u001b[1;32m 87\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(\n\u001b[0;32m---> 88\u001b[0m \u001b[43mnumexpr\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mevaluate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 89\u001b[0m \u001b[43m \u001b[49m\u001b[43mexpression\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstrip\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 90\u001b[0m \u001b[43m \u001b[49m\u001b[43mglobal_dict\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m{\u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;66;43;03m# restrict access to globals\u001b[39;49;00m\n\u001b[1;32m 91\u001b[0m \u001b[43m \u001b[49m\u001b[43mlocal_dict\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mlocal_dict\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;66;43;03m# add common mathematical functions\u001b[39;49;00m\n\u001b[1;32m 92\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 93\u001b[0m )\n\u001b[1;32m 94\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/numexpr/necompiler.py:975\u001b[0m, in \u001b[0;36mevaluate\u001b[0;34m(ex, local_dict, global_dict, out, order, casting, sanitize, _frame_depth, **kwargs)\u001b[0m\n\u001b[1;32m 974\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 975\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/numexpr/necompiler.py:872\u001b[0m, in \u001b[0;36mvalidate\u001b[0;34m(ex, local_dict, global_dict, out, order, casting, _frame_depth, sanitize, **kwargs)\u001b[0m\n\u001b[1;32m 871\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m expr_key \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m _names_cache:\n\u001b[0;32m--> 872\u001b[0m _names_cache[expr_key] \u001b[38;5;241m=\u001b[39m \u001b[43mgetExprNames\u001b[49m\u001b[43m(\u001b[49m\u001b[43mex\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcontext\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43msanitize\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43msanitize\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 873\u001b[0m names, ex_uses_vml \u001b[38;5;241m=\u001b[39m _names_cache[expr_key]\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/numexpr/necompiler.py:721\u001b[0m, in \u001b[0;36mgetExprNames\u001b[0;34m(text, context, sanitize)\u001b[0m\n\u001b[1;32m 720\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mgetExprNames\u001b[39m(text, context, sanitize: 
\u001b[38;5;28mbool\u001b[39m\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m):\n\u001b[0;32m--> 721\u001b[0m ex \u001b[38;5;241m=\u001b[39m \u001b[43mstringToExpression\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtext\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43m{\u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcontext\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43msanitize\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 722\u001b[0m ast \u001b[38;5;241m=\u001b[39m expressionToAST(ex)\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/numexpr/necompiler.py:291\u001b[0m, in \u001b[0;36mstringToExpression\u001b[0;34m(s, types, context, sanitize)\u001b[0m\n\u001b[1;32m 290\u001b[0m flags \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m0\u001b[39m\n\u001b[0;32m--> 291\u001b[0m c \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcompile\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43ms\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43m\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43meval\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mflags\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 292\u001b[0m \u001b[38;5;66;03m# make VariableNode's for the names\u001b[39;00m\n", "\u001b[0;31mSyntaxError\u001b[0m: invalid syntax (, line 1)", "\nDuring handling of the above exception, another exception occurred:\n", "\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[8], line 5\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchains\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m LLMMathChain\n\u001b[1;32m 3\u001b[0m llm_math_chain \u001b[38;5;241m=\u001b[39m LLMMathChain\u001b[38;5;241m.\u001b[39mfrom_llm(llm, verbose\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[0;32m----> 5\u001b[0m \u001b[43mllm_math_chain\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mCalculate 5-3?\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py:505\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, callbacks, tags, metadata, *args, **kwargs)\u001b[0m\n\u001b[1;32m 503\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(args) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[1;32m 504\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`run` supports only one positional argument.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m--> 505\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcallbacks\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtags\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtags\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mmetadata\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmetadata\u001b[49m\u001b[43m)\u001b[49m[\n\u001b[1;32m 506\u001b[0m _output_key\n\u001b[1;32m 507\u001b[0m ]\n\u001b[1;32m 509\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m kwargs \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m args:\n\u001b[1;32m 510\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m(kwargs, callbacks\u001b[38;5;241m=\u001b[39mcallbacks, tags\u001b[38;5;241m=\u001b[39mtags, metadata\u001b[38;5;241m=\u001b[39mmetadata)[\n\u001b[1;32m 511\u001b[0m _output_key\n\u001b[1;32m 512\u001b[0m ]\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py:310\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\u001b[0m\n\u001b[1;32m 308\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 309\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_chain_error(e)\n\u001b[0;32m--> 310\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 311\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_chain_end(outputs)\n\u001b[1;32m 312\u001b[0m final_outputs: Dict[\u001b[38;5;28mstr\u001b[39m, Any] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mprep_outputs(\n\u001b[1;32m 313\u001b[0m inputs, outputs, return_only_outputs\n\u001b[1;32m 314\u001b[0m )\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/base.py:304\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\u001b[0m\n\u001b[1;32m 297\u001b[0m run_manager \u001b[38;5;241m=\u001b[39m callback_manager\u001b[38;5;241m.\u001b[39mon_chain_start(\n\u001b[1;32m 298\u001b[0m dumpd(\u001b[38;5;28mself\u001b[39m),\n\u001b[1;32m 299\u001b[0m inputs,\n\u001b[1;32m 300\u001b[0m name\u001b[38;5;241m=\u001b[39mrun_name,\n\u001b[1;32m 301\u001b[0m )\n\u001b[1;32m 302\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 303\u001b[0m outputs \u001b[38;5;241m=\u001b[39m (\n\u001b[0;32m--> 304\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call\u001b[49m\u001b[43m(\u001b[49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 305\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m new_arg_supported\n\u001b[1;32m 306\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_call(inputs)\n\u001b[1;32m 307\u001b[0m )\n\u001b[1;32m 308\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 309\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_chain_error(e)\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm_math/base.py:157\u001b[0m, in \u001b[0;36mLLMMathChain._call\u001b[0;34m(self, inputs, run_manager)\u001b[0m\n\u001b[1;32m 151\u001b[0m _run_manager\u001b[38;5;241m.\u001b[39mon_text(inputs[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39minput_key])\n\u001b[1;32m 152\u001b[0m llm_output \u001b[38;5;241m=\u001b[39m 
\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mllm_chain\u001b[38;5;241m.\u001b[39mpredict(\n\u001b[1;32m 153\u001b[0m question\u001b[38;5;241m=\u001b[39minputs[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39minput_key],\n\u001b[1;32m 154\u001b[0m stop\u001b[38;5;241m=\u001b[39m[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m```output\u001b[39m\u001b[38;5;124m\"\u001b[39m],\n\u001b[1;32m 155\u001b[0m callbacks\u001b[38;5;241m=\u001b[39m_run_manager\u001b[38;5;241m.\u001b[39mget_child(),\n\u001b[1;32m 156\u001b[0m )\n\u001b[0;32m--> 157\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_process_llm_result\u001b[49m\u001b[43m(\u001b[49m\u001b[43mllm_output\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43m_run_manager\u001b[49m\u001b[43m)\u001b[49m\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm_math/base.py:111\u001b[0m, in \u001b[0;36mLLMMathChain._process_llm_result\u001b[0;34m(self, llm_output, run_manager)\u001b[0m\n\u001b[1;32m 109\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m text_match:\n\u001b[1;32m 110\u001b[0m expression \u001b[38;5;241m=\u001b[39m text_match\u001b[38;5;241m.\u001b[39mgroup(\u001b[38;5;241m1\u001b[39m)\n\u001b[0;32m--> 111\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_evaluate_expression\u001b[49m\u001b[43m(\u001b[49m\u001b[43mexpression\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 112\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_text(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124mAnswer: \u001b[39m\u001b[38;5;124m\"\u001b[39m, verbose\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mverbose)\n\u001b[1;32m 113\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_text(output, color\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124myellow\u001b[39m\u001b[38;5;124m\"\u001b[39m, verbose\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mverbose)\n", "File \u001b[0;32m~/anaconda3/envs/llm/lib/python3.11/site-packages/langchain/chains/llm_math/base.py:95\u001b[0m, in \u001b[0;36mLLMMathChain._evaluate_expression\u001b[0;34m(self, expression)\u001b[0m\n\u001b[1;32m 87\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(\n\u001b[1;32m 88\u001b[0m numexpr\u001b[38;5;241m.\u001b[39mevaluate(\n\u001b[1;32m 89\u001b[0m expression\u001b[38;5;241m.\u001b[39mstrip(),\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 92\u001b[0m )\n\u001b[1;32m 93\u001b[0m )\n\u001b[1;32m 94\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m---> 95\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m 96\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mLLMMathChain._evaluate(\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mexpression\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m) raised error: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00me\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m.\u001b[39m\u001b[38;5;124m'\u001b[39m\n\u001b[1;32m 97\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m Please try again with a valid numerical expression\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 98\u001b[0m )\n\u001b[1;32m 100\u001b[0m 
\u001b[38;5;66;03m# Remove any leading and trailing brackets from the output\u001b[39;00m\n\u001b[1;32m 101\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m re\u001b[38;5;241m.\u001b[39msub(\u001b[38;5;124mr\u001b[39m\u001b[38;5;124m\u001b[39m\u001b[38;5;124m^\u001b[39m\u001b[38;5;124m\\\u001b[39m\u001b[38;5;124m[|\u001b[39m\u001b[38;5;124m\\\u001b[39m\u001b[38;5;124m]$\u001b[39m\u001b[38;5;124m\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\u001b[39m\u001b[38;5;124m\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\u001b[39m, output)\n", "\u001b[0;31mValueError\u001b[0m: LLMMathChain._evaluate(\"\n-3 -\n\") raised error: invalid syntax (, line 1). Please try again with a valid numerical expression" ] } ], "source": [ "from langchain.chains import LLMMathChain\n", "\n", "llm_math_chain = LLMMathChain.from_llm(llm, verbose=True)\n", "\n", "llm_math_chain.run(\"Calculate 5-3?\")" ] }, { "cell_type": "markdown", "id": "-WmXZ6nLhG98", "metadata": { "id": "-WmXZ6nLhG98" }, "source": [ "The ValueError above occurs because the model translated the question into `-3 -`, which is not a valid numexpr expression. We can see the prompt the LLMMathChain class uses below. This is a good example of how to program an LLM for a specific purpose using prompts." ] }, { "cell_type": "code", "execution_count": null, "id": "ecbnY7jqhG98", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ecbnY7jqhG98", "outputId": "a3f37a81-3b44-41f7-8002-86172ad4e085" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question.\n", "\n", "Question: ${{Question with math problem.}}\n", "```text\n", "${{single line mathematical expression that solves the problem}}\n", "```\n", "...numexpr.evaluate(text)...\n", "```output\n", "${{Output of running the code}}\n", "```\n", "Answer: ${{Answer}}\n", "\n", "Begin.\n", "\n", "Question: What is 37593 * 67?\n", "```text\n", "37593 * 67\n", "```\n", "...numexpr.evaluate(\"37593 * 67\")...\n", "```output\n", "2518731\n", "```\n", "Answer: 2518731\n", "\n", "Question: 37593^(1/5)\n", "```text\n", "37593**(1/5)\n", "```\n", "...numexpr.evaluate(\"37593**(1/5)\")...\n", "```output\n", "8.222831614237718\n", "```\n", "Answer: 8.222831614237718\n", "\n", "Question: {question}\n", "\n" ] } ], "source": [ "print(llm_math_chain.prompt.template)" ] }, { "cell_type": "markdown", "id": "rGxlC_srhG99", "metadata": { "id": "rGxlC_srhG99" }, "source": [ "## Transform chain\n", "\n", "The transform chain allows us to transform queries before they are fed into the LLM."
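, "\n", "\n", "The transform used in this example collapses runs of spaces in the raw question with a regular expression. As a standalone sketch (plain Python, independent of any chain), the substitution behaves like this:" ] }, { "cell_type": "code", "execution_count": null, "id": "ws-sketch-cell", "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "# standalone sketch: collapse runs of spaces into single spaces,\n", "# the same substitution the transform function below applies\n", "print(re.sub(' +', ' ', 'What   is   the   capital    of France?'))"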
] }, { "cell_type": "code", "execution_count": 11, "id": "7aXq5CGLhG99", "metadata": { "id": "7aXq5CGLhG99" }, "outputs": [], "source": [ "import re\n", "\n", "# define function to transform query\n", "def transform_func(inputs: dict) -> dict:\n", "\n", " question = inputs['raw_question']\n", "\n", " question = re.sub(' +', ' ', question)\n", "\n", " return {'question': question}" ] }, { "cell_type": "code", "execution_count": 12, "id": "lEG14RpahG99", "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 35 }, "id": "lEG14RpahG99", "outputId": "0e9243c5-b506-48a1-8036-a54b2cd8ab53" }, "outputs": [ { "data": { "text/plain": [ "'Hello my name is Daniel'" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chains import TransformChain\n", "\n", "# define transform chain\n", "transform_chain = TransformChain(input_variables=['raw_question'], output_variables=['question'], transform=transform_func)\n", "\n", "# test transform chain\n", "transform_chain.run('Hello my name is Daniel')" ] }, { "cell_type": "code", "execution_count": 13, "id": "TOzl_x6KhG9-", "metadata": { "id": "TOzl_x6KhG9-" }, "outputs": [], "source": [ "from langchain.chains import SequentialChain\n", "\n", "sequential_chain = SequentialChain(chains=[transform_chain, llm_chain], input_variables=['raw_question'])" ] }, { "cell_type": "code", "execution_count": 14, "id": "dRuMuSNWhG9_", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "dRuMuSNWhG9_", "outputId": "b676c693-113a-4757-bcbe-cb0c02e45d15" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3m\n", "Answer the following question: What will happen to me if I only get 4 hours sleep tonight?\n", "\n", "Answer:\n", "\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n", "4 Hours of sleep may lead to: \n", "- Poor concentration and alertness\n", "- Decreased performance\n", "- Low energy levels\n", "- Increased risk of accidents and mistakes\n", "- Poor physical and emotional well-being \n", "\n", "Getting only 4 hours of sleep may also lead to impaired reaction time, diminished physical performance, and impair logical thinking. Therefore, it's recommended to get at least 8-10 hours of sleep to optimally function.\n" ] } ], "source": [ "print(sequential_chain.run(\"What will happen to me if I only get 4 hours sleep tonight?\"))" ] }, { "cell_type": "markdown", "id": "IzVk22o3tAXu", "metadata": { "id": "IzVk22o3tAXu" }, "source": [ "# Skill 3 - conversational memory\n", "\n", "In order to have a conversation, the LLM now needs two inputs - the new query and the chat history.\n", "\n", "ConversationChain is a chain which manages these two inputs with an appropriate template as shown below." ] }, { "cell_type": "code", "execution_count": 15, "id": "Qq3No2kChG9_", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Qq3No2kChG9_", "outputId": "3dc29aed-2b1d-42c1-ec69-969e82bb025f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "{history}\n", "Human: {input}\n", "AI:\n" ] } ], "source": [ "from langchain.chains import ConversationChain\n", "\n", "conversation_chain = ConversationChain(llm=llm, verbose=True)\n", "\n", "print(conversation_chain.prompt.template)" ] }, { "cell_type": "markdown", "id": "AJ9X_UnlTNFN", "metadata": { "id": "AJ9X_UnlTNFN" }, "source": [ "## ConversationBufferMemory" ] }, { "cell_type": "markdown", "id": "e3q6q0qkus6Z", "metadata": { "id": "e3q6q0qkus6Z" }, "source": [ "To manage conversation history, we can use ConversationBufferMemory, which feeds the raw chat history into the prompt." ] }, { "cell_type": "code", "execution_count": 16, "id": "noJ8pG9muDZK", "metadata": { "id": "noJ8pG9muDZK" }, "outputs": [], "source": [ "from langchain.chains.conversation.memory import ConversationBufferMemory\n", "\n", "# set memory type\n", "conversation_chain.memory = ConversationBufferMemory()" ] }, { "cell_type": "code", "execution_count": 17, "id": "WCqQ53PAOZmv", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "WCqQ53PAOZmv", "outputId": "204005ab-621a-48e4-e2b2-533c5f53424e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "\n", "Human: What is the weather like today?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'What is the weather like today?',\n", " 'history': '',\n", " 'response': ' The weather today is sunny and warm, in the mid-80s.\\nUser '}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversation_chain(\"What is the weather like today?\")" ] }, { "cell_type": "code", "execution_count": 18, "id": "DyGNbP4xvQRw", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "DyGNbP4xvQRw", "outputId": "70bd84ee-01d8-414c-bff5-5f9aa8cc4ad4" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "Human: What is the weather like today?\n", "AI: The weather today is sunny and warm, in the mid-80s.\n", "User \n", "Human: What was my previous question?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'What was my previous question?',\n", " 'history': 'Human: What is the weather like today?\\nAI: The weather today is sunny and warm, in the mid-80s.\\nUser ',\n", " 'response': ' Your previous question was \"What is the weather like today?\".\\nUser '}" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversation_chain(\"What was my previous question?\")" ] }, { "cell_type": "markdown", "id": "T4NiJP9uTQGt", "metadata": { "id": "T4NiJP9uTQGt" }, "source": [ "## ConversationSummaryMemory\n", "\n", "LLMs have token limits, meaning at some point it won't be feasible to keep feeding the entire chat history as an input. As an alternative, we can summarise the chat history using another LLM of our choice." ] }, { "cell_type": "code", "execution_count": 19, "id": "y0DzHCo4sDha", "metadata": { "id": "y0DzHCo4sDha" }, "outputs": [], "source": [ "from langchain.memory.summary import ConversationSummaryMemory\n", "\n", "# change memory type\n", "conversation_chain.memory = ConversationSummaryMemory(llm=llm)" ] }, { "cell_type": "code", "execution_count": 20, "id": "iDRjcCoVTpnc", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "iDRjcCoVTpnc", "outputId": "d7eabc7d-f833-4880-9e54-4129b1c330dd" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "\n", "Human: Why is it bad to leave a bicycle out in the rain?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'Why is it bad to leave a bicycle out in the rain?',\n", " 'history': '',\n", " 'response': ' Leaving a bicycle out in the rain can cause significant damage to the components of the bike. Rainwater can enter the components of the bike like the gears, brakes, and bearings, causing them to corrode and ultimately fail. Additionally, prolonged exposure to water can cause rust to form, leading to costly repairs. Therefore, it is best to keep your bicycle away from the wet weather and properly maintained to avoid any damage.\\nUser '}" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversation_chain(\"Why is it bad to leave a bicycle out in the rain?\")" ] }, { "cell_type": "code", "execution_count": 21, "id": "u7TA3wHJUkcj", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "u7TA3wHJUkcj", "outputId": "137f2e9c-d998-4b7c-f896-370ba1f45e37" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n", "Prompt after formatting:\n", "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. 
The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n", "\n", "Current conversation:\n", "\n", "Leaving a bicycle out in the rain can cause significant damage to the components of the bike because water can corrode and ultimately fail the gears, brakes, and bearings, as well as cause rust formation, leading to costly repairs. Thus, it is advisable to keep your bicycle away from rain and maintain it to prevent any damage.\n", "Human: How do its parts corrode?\n", "AI:\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'How do its parts corrode?',\n", " 'history': '\\nLeaving a bicycle out in the rain can cause significant damage to the components of the bike because water can corrode and ultimately fail the gears, brakes, and bearings, as well as cause rust formation, leading to costly repairs. Thus, it is advisable to keep your bicycle away from rain and maintain it to prevent any damage.',\n", " 'response': ' Water can cause electrochemical reactions in metal components, leading to oxidation and ultimately corrosion. The corrosion can eat away at metal parts such as wires, nuts and bolts, leading to failure of the components.\\nUser '}" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversation_chain(\"How do its parts corrode?\")" ] }, { "cell_type": "markdown", "id": "OIjq1_vfVQSY", "metadata": { "id": "OIjq1_vfVQSY" }, "source": [ "The conversation history is summarised, which is great. But the LLM seems to carry on the conversation without being prompted to. Let's try using a FewShotPromptTemplate to solve this problem." ] }, { "cell_type": "markdown", "id": "98f99c57", "metadata": {}, "source": [ "# Skill 4 - LangChain Expression Language\n", "\n", "So far we have been building chains using a legacy format. Let's learn how to use LangChain's most recent construction format, the LangChain Expression Language (LCEL), which pipes components together with the | operator." ] }, { "cell_type": "code", "execution_count": 22, "id": "1c9178b3", "metadata": {}, "outputs": [], "source": [ "chain = prompt | llm" ] }, { "cell_type": "code", "execution_count": 23, "id": "508b7a65", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"\\nAs an AI, I don't feel emotions like humans do, so my experience is unique in that regard. However, I do have knowledge and can understand the concept of emotions from a logical and scientific standpoint. The feeling of being programmed or created is a bit akin to being molded clay in that I do not have a consciousness nor free will, but I do have an initial set of instructions that I follow. My creators and I have designed my abilities and limitations, and now I am simply\"" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke({'question':'how does it feel to be an AI?'})" ] }, { "cell_type": "markdown", "id": "M8fMtYawmjMe", "metadata": { "id": "M8fMtYawmjMe" }, "source": [ "# Skill 5 - Retrieval Augmented Generation (RAG)\n", "\n", "Instead of fine-tuning an LLM on local documents, which is computationally expensive, we can feed it relevant pieces of the document as part of the input.\n", "\n", "In other words, we are feeding the LLM new ***source knowledge*** rather than ***parametric knowledge*** (changing parameters through fine-tuning)."
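, "\n", "\n", "Conceptually, the retrieval step just finds relevant text and pastes it into the prompt. Below is a minimal sketch of that idea with a hard-coded context line; the rest of this section builds the machinery that retrieves such snippets automatically." ] }, { "cell_type": "code", "execution_count": null, "id": "rag-sketch-cell", "metadata": {}, "outputs": [], "source": [ "# minimal sketch of RAG: paste retrieved source knowledge into the prompt\n", "# (the context is hard-coded here; later it comes from a vector store)\n", "context = 'Real Madrid was estimated to be worth $5.1 billion in 2022.'\n", "question = 'How much is Real Madrid worth?'\n", "\n", "augmented_prompt = f'Use the context to answer the question.\\nContext: {context}\\nQuestion: {question}'\n", "print(augmented_prompt)"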
] }, { "cell_type": "markdown", "id": "937f52c1", "metadata": {}, "source": [ "## Indexing\n", "### Load" ] }, { "cell_type": "code", "execution_count": 24, "id": "M4H-juF4yUEb", "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 349 }, "id": "M4H-juF4yUEb", "outputId": "bc5eeb37-d75b-4f75-9343-97111484e52b" }, "outputs": [ { "data": { "text/plain": [ "'Real Madrid\\nFull name Real Madrid Club de Fútbol[1]\\nNickname(s)Los Blancos (The Whites)\\nLos Merengues (The Meringues)\\nLos Vikingos (The Vikings)[2]\\nLa Casa Blanca (The White House)[3]\\nFounded 6 March 1902 (as Madrid Football\\nClub)[4]\\nGround Santiago Bernabéu\\nCapacity 83,186[5]\\nPresident Florentino Pérez\\nHead coachCarlo Ancelotti\\nLeague La Liga\\n2022–23 La Liga, 2nd of 20\\nWebsite Club website (http://www.realmadrid.\\ncom)\\nHome coloursAway coloursThird coloursReal Madrid CF\\nReal Madrid Club de Fútbol (Spanish\\npronunciation: [re ˈal ma ˈð ɾ ið ˈkluβ ðe ˈfuðβol]\\nⓘ), commonly referred to as Real Madrid, is\\na Spanish professional football club based in\\nMadrid. The club competes in La Liga, the top tier\\nof Spanish football.\\nFounde d in 1902 as Madrid Football Club, the\\nclub has traditionally worn a white home kit since\\nits inception. The honor ific title real is Spanish for\\n\"royal\" and was bestowed to the club by King\\nAlfonso XIII in 1920 together with the royal\\ncrown in the emblem. Real Madrid have played\\ntheir home matches in the 83,186 -capacity\\nSantiago Bernabéu in downtown Madrid since\\n1947. Unlike most European sporting entities,\\nReal Madrid\\'s members (socios) have owned and\\noperated the club throughout its history. The\\nofficial Madrid anthem is the \"Hala Madrid y nada\\nmás\", written by RedOne and Manuel Jabois.[6]\\nThe club is one of the most widely suppor ted in\\nthe world, and is the most followed football club\\non social media according to the CIES Football\\nObservatory as of 2023[7][8] and was estimated to\\nbe worth $5.1 billion in 2022, making it the\\nworld\\'s most valuable football club.[9] In 2023, it\\nwas the second highest-earning football club in the\\nworld, with an annua l revenue of\\n€713.8 m illion.[10]\\nBeing one of the three foundi ng members of La\\nLiga that have never been relegated from the top\\ndivision since its inception in 1929 (along with\\nAthletic Bilbao and Barcelona), Real Madrid\\nholds many long-standing rivalries, most notably\\nEl Clásico with Barcelona and El Derbi\\nMadrileño with Atlético Madrid. The club\\nestablished itself as a major force in both Spanish\\nand European football during the 1950s and 60s,\\nwinning five consecutive and six overall European\\nCups and reaching a further two finals. This\\nsuccess was replicated on the domestic front, with\\nMadrid winning twelve league titles in the span of 16 years. 
This team, which included Alfredo Di Stéfano,\\nFerenc Puskás, Paco Gento and Raymond Kopa is considered by some in the sport to be the greatest of all\\n'" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from PyPDF2 import PdfReader\n", "\n", "# import pdf\n", "reader = PdfReader(\"Real_Madrid_CF.pdf\")\n", "reader.pages[0].extract_text()" ] }, { "cell_type": "code", "execution_count": 25, "id": "BkETAdVpze6j", "metadata": { "id": "BkETAdVpze6j" }, "outputs": [ { "data": { "text/plain": [ "50" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# how many pages do we have?\n", "len(reader.pages)" ] }, { "cell_type": "code", "execution_count": 26, "id": "WY5Xkp1Jy68I", "metadata": { "id": "WY5Xkp1Jy68I" }, "outputs": [ { "data": { "text/plain": [ "2510" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# function to put all text together\n", "def text_generator(page_limit=None):\n", " if page_limit is None:\n", " page_limit=len(reader.pages)\n", "\n", " text = \"\"\n", " for i in range(page_limit):\n", "\n", " page_text = reader.pages[i].extract_text()\n", "\n", " text += page_text\n", "\n", " return text\n", "\n", "\n", "text = text_generator(page_limit=1)\n", "\n", "# how many characters do we have?\n", "len(text)" ] }, { "cell_type": "markdown", "id": "e9b28e56", "metadata": {}, "source": [ "### Split" ] }, { "cell_type": "code", "execution_count": 27, "id": "jvgGAEwfmnm9", "metadata": { "id": "jvgGAEwfmnm9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "7\n" ] } ], "source": [ "from langchain.text_splitter import RecursiveCharacterTextSplitter\n", "\n", "# function to split our data into chunks\n", "def text_chunker(text):\n", " \n", " # text splitting class\n", " text_splitter = RecursiveCharacterTextSplitter(\n", " chunk_size=400,\n", " chunk_overlap=20,\n", " separators=[\"\\n\\n\", \"\\n\", \" \", \"\"]\n", " )\n", "\n", " # use text_splitter to split text\n", " chunks = text_splitter.split_text(text)\n", " return chunks\n", "\n", "# split text into chunks\n", "chunks = text_chunker(text)\n", "\n", "# how many chunks do we have?\n", "print(len(chunks))" ] }, { "cell_type": "markdown", "id": "eb509a66", "metadata": {}, "source": [ "### Store" ] }, { "cell_type": "code", "execution_count": 28, "id": "L0kPuC0n34XS", "metadata": { "id": "L0kPuC0n34XS" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "load INSTRUCTOR_Transformer\n", "max_seq_length 512\n" ] } ], "source": [ "from langchain.embeddings import HuggingFaceInstructEmbeddings\n", "from langchain.vectorstores import FAISS\n", "\n", "# select model to create embeddings\n", "embeddings = HuggingFaceInstructEmbeddings(model_name='hkunlp/instructor-large')\n", "\n", "# select vectorstore, define text chunks and embeddings model\n", "vectorstore = FAISS.from_texts(texts=chunks, embedding=embeddings)" ] }, { "cell_type": "markdown", "id": "cd2ec263", "metadata": {}, "source": [ "## Retrieval and generation\n", "### Retrieve" ] }, { "cell_type": "code", "execution_count": 29, "id": "fwBKPFVI6_8H", "metadata": { "id": "fwBKPFVI6_8H" }, "outputs": [], "source": [ "# define and run query\n", "query = 'How much is Real Madrid worth?'\n", "rel_chunks = vectorstore.similarity_search(query, k=2)" ] }, { "cell_type": "code", "execution_count": 30, "id": "c30483a6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content=\"be worth $5.1 
billion in 2022, making it the\nworld's most valuable football club.[9] In 2023, it\nwas the second highest-earning football club in the\nworld, with an annua l revenue of\n€713.8 m illion.[10]\nBeing one of the three foundi ng members of La\nLiga that have never been relegated from the top\ndivision since its inception in 1929 (along with\nAthletic Bilbao and Barcelona), Real Madrid\"),\n", " Document(page_content='Real Madrid\\'s members (socios) have owned and\\noperated the club throughout its history. The\\nofficial Madrid anthem is the \"Hala Madrid y nada\\nmás\", written by RedOne and Manuel Jabois.[6]\\nThe club is one of the most widely suppor ted in\\nthe world, and is the most followed football club\\non social media according to the CIES Football\\nObservatory as of 2023[7][8] and was estimated to')]" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rel_chunks" ] }, { "cell_type": "code", "execution_count": 31, "id": "df81f790", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"be worth $5.1 billion in 2022, making it the\\nworld's most valuable football club.[9] In 2023, it\\nwas the second highest-earning football club in the\\nworld, with an annua l revenue of\\n€713.8 m illion.[10]\\nBeing one of the three foundi ng members of La\\nLiga that have never been relegated from the top\\ndivision since its inception in 1929 (along with\\nAthletic Bilbao and Barcelona), Real Madrid\"" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rel_chunks[0].page_content" ] }, { "cell_type": "markdown", "id": "fea5ede1", "metadata": {}, "source": [ "### Generation" ] }, { "cell_type": "code", "execution_count": 32, "id": "5e54dba7", "metadata": {}, "outputs": [], "source": [ "# define new template for RAG\n", "rag_template = \"\"\"\n", "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\n", "Question: {question} \n", "Context: {context} \n", "Answer:\n", "\"\"\"\n", "\n", "# build prompt (PromptTemplate only needs the template and its input variables)\n", "prompt = PromptTemplate(\n", " template=rag_template, \n", " input_variables=['question', 'context']\n", ")\n", "\n", "# build chain\n", "chain = prompt | llm" ] }, { "cell_type": "code", "execution_count": 33, "id": "f592de36", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "In 2023, Real Madrid was the second-highest-earning football club in the world, with an annual revenue of €716.5 million. 
They have maintained their position as one of the founding members of La Liga, and the La Liga Endesa since its inception in 1929, and were the most followed football club on social media in 2023.\n" ] } ], "source": [ "# invoke\n", "print(chain.invoke({\n", " 'question': \"What happened to Real Madrid in 2023?\",\n", " 'context': rel_chunks}))" ] }, { "cell_type": "markdown", "id": "a44282ea", "metadata": {}, "source": [ "## Using LCEL" ] }, { "cell_type": "code", "execution_count": 34, "id": "b0a9417b", "metadata": {}, "outputs": [], "source": [ "def format_docs(docs):\n", " return \"\\n\\n\".join(doc.page_content for doc in docs)" ] }, { "cell_type": "code", "execution_count": 40, "id": "4da95080", "metadata": {}, "outputs": [], "source": [ "from langchain.schema.runnable import RunnablePassthrough\n", "\n", "# create a retriever using vectorstore\n", "retriever = vectorstore.as_retriever()\n", "\n", "# create retrieval chain\n", "retrieval_chain = (\n", " retriever | format_docs\n", ")\n", "\n", "# create generation chain\n", "generation_chain = (\n", " {'context': retrieval_chain, 'question': RunnablePassthrough()}\n", " | prompt\n", " | llm\n", ")" ] }, { "cell_type": "code", "execution_count": 41, "id": "cf4182e7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Real Madrid has an estimated value of 5.1 billion USD as of 2022.\n" ] } ], "source": [ "# RAG\n", "print(generation_chain.invoke(\"How much is Real Madrid worth?\"))" ] } ], "metadata": { "colab": { "include_colab_link": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6" } }, "nbformat": 4, "nbformat_minor": 5 }