6  Chains


Complex applications often require chaining LLM calls together, or combining them with other components.

We will cover three types of chains: LLMChain, sequential chains, and router chains. First, install the required packages:

! pip install --upgrade google-cloud-aiplatform
! pip install "shapely<2.0.0"
! pip install langchain
! pip install pypdf
! pip install pydantic==1.10.8
! pip install chromadb==0.3.26
! pip install "langchain[docarray]"
! pip install typing-inspect==0.8.0 typing_extensions==4.5.0
# Automatically restart kernel after installs so that your environment can access the new packages
import IPython

app = IPython.Application.instance()
app.kernel.do_shutdown(True)

If you're on Colab, authenticate via the following cell.

from google.colab import auth
auth.authenticate_user()

7 Initialize the SDK and LLM

# Add your project ID and region
PROJECT_ID = "<..>"
REGION = "<..>"
# Utils
import time
from typing import List

# Vertex AI
import vertexai

# Langchain
import langchain
from pydantic import BaseModel

print(f"LangChain version: {langchain.__version__}")
from langchain.chat_models import ChatVertexAI
from langchain.prompts import ChatPromptTemplate
from langchain.llms import VertexAI
from langchain.chains import LLMChain
vertexai.init(project=PROJECT_ID, location=REGION)

# LLM model
llm = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=256,
    # A higher temperature produces more creative output
    temperature=0.9,
    top_p=0.8,
    top_k=40,
    verbose=True,
)
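
As a quick sanity check, you can call the model directly before wiring it into any chains; the prompt below is only an illustration:

# A plain string prompt returns a string completion from the model
print(llm("What would be a good name for a company that makes colorful socks?"))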

7.0.1 LLMChain

An LLMChain is the simplest chain: it formats a prompt template with the input variables and passes the result to the LLM.

prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
product = "A saw for laminate wood"
chain.run(product)

7.0.2 Sequential chain

A sequential chain makes a series of calls to an LLM. It enables a pipeline-style workflow in which the output from one call becomes the input to the next.

The two types include:

  • SimpleSequentialChain, where each step has a single input and a single output, which becomes the input to the next step.

  • SequentialChain, which allows for multiple inputs and outputs (a sketch follows the SimpleSequentialChain example below).

from langchain.chains import SimpleSequentialChain
from langchain.prompts import PromptTemplate
# This is an LLMChain to write a pitch for a new product
llm = VertexAI(temperature=0.7)
template = """You are an entrepreneur. Think of a ground breaking new product and write a short pitch.

Title: {title}
Entrepreneur: This is a pitch for the above product:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
pitch_chain = LLMChain(llm=llm, prompt=prompt_template)
template = """You are a panelist on Dragon's Den. Given a \
description of the product, you are to explain why you think it will \
succeed or fail in the market.

Product pitch: {pitch}
Review by Dragon's Den panelist:"""
prompt_template = PromptTemplate(input_variables=["pitch"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is the overall chain where we run these two chains in sequence.
overall_chain = SimpleSequentialChain(chains=[pitch_chain, review_chain], verbose=True)
review = overall_chain.run("Portable iced coffee maker")
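
The example above uses SimpleSequentialChain, which threads a single string from step to step. Below is a minimal sketch of the multi-input, multi-output SequentialChain variant mentioned earlier; the variable names (title, audience, pitch, review) and prompt wording are illustrative, not part of the original example.

from langchain.chains import SequentialChain

# Step 1: one input ("title"), one named output ("pitch")
pitch_prompt = PromptTemplate(
    input_variables=["title"],
    template="Write a short pitch for this new product: {title}",
)
pitch_step = LLMChain(llm=llm, prompt=pitch_prompt, output_key="pitch")

# Step 2: two inputs ("pitch" from step 1 plus a fresh "audience" input),
# one named output ("review")
review_prompt = PromptTemplate(
    input_variables=["pitch", "audience"],
    template="Review this pitch for a {audience} audience:\n{pitch}",
)
review_step = LLMChain(llm=llm, prompt=review_prompt, output_key="review")

multi_chain = SequentialChain(
    chains=[pitch_step, review_step],
    input_variables=["title", "audience"],  # multiple inputs
    output_variables=["pitch", "review"],   # multiple outputs
    verbose=True,
)
multi_chain({"title": "Portable iced coffee maker", "audience": "investor"})

Calling a SequentialChain with a dict of inputs returns a dict containing each key listed in output_variables.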

7.0.3 Router chain

A router chain dynamically selects the next chain to use for a given input. Here we use MultiPromptChain, which first picks the prompt best suited to the question and then answers with it.

korean_template = """
You are an expert in Korean history and culture.
Here is a question:
{input}
"""

spanish_template = """
You are an expert in Spanish history and culture.
Here is a question:
{input}
"""

chinese_template = """
You are an expert in Chinese history and culture.
Here is a question:
{input}
"""
prompt_infos = [
    {
        "name": "korean",
        "description": "Good for answering questions about Korean history and culture",
        "prompt_template": korean_template,
    },
    {
        "name": "spanish",
        "description": "Good for answering questions about Spanish history and culture",
        "prompt_template": spanish_template,
    },
    {
        "name": "chinese",
        "description": "Good for answering questions about Chinese history and culture",
        "prompt_template": chinese_template,
    },
]
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
llm = VertexAI(temperature=0)
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)
# Thanks to Deeplearning.ai for this template and for the
# Langchain short course at deeplearning.ai/short-courses/.

MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising \
it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not \
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (remember to include the ```json)>>"""
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(router_chain=router_chain,
                         destination_chains=destination_chains,
                         default_chain=default_chain, verbose=True
                        )

Notice in the outputs that the chosen speciality is prefixed, e.g. chinese: {'input': ..., denoting that the question was routed to the correct expert.

chain.run("What was the Han Dynasty?")
chain.run("What are some of typical dishes in Catalonia?")
chain.run("How would I greet a friend's parents in Korean?")
chain.run("Summarize Don Quixote in a short paragraph")

If we provide a question outside our experts' fields, the default chain handles it.

chain.run("How can I fix a carburetor?")