Class LLMChain<T, Model>

Deprecated

This class will be removed in 0.3.0. Use the LangChain Expression Language (LCEL) instead; the example below shows the equivalent LCEL usage.

Chain to run queries against LLMs.

Example

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a {adjective} joke");
const llm = new ChatOpenAI();
const chain = prompt.pipe(llm);

const response = await chain.invoke({ adjective: "funny" });
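
Note that the LCEL chain above resolves to a chat model message object rather than the { text: ... } record that an LLMChain produces. A minimal sketch of getting a plain string instead, using StringOutputParser from @langchain/core:

import { StringOutputParser } from "@langchain/core/output_parsers";

const stringChain = prompt.pipe(llm).pipe(new StringOutputParser());
// Resolves to a plain string, analogous to the "text" output of an LLMChain.
const text = await stringChain.invoke({ adjective: "funny" });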

Type Parameters

  • T extends string | object = string

  • Model extends LLMType = LLMType

Properties

llm: Model

LLM Wrapper to use

outputKey: string = "text"

Key to use for the output, defaults to "text"

prompt: BasePromptTemplate<any, BasePromptValueInterface, any>

Prompt object to use

llmKwargs?: CallOptionsIfAvailable<Model>

Call options (kwargs) to pass to the LLM

memory?: BaseMemory

Optional memory to read inputs from and write run context to

outputParser?: BaseLLMOutputParser<T>

OutputParser to use
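
Together these fields form the constructor input. A minimal construction sketch (the deprecation notice above still applies, so prefer LCEL in new code):

import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { OpenAI } from "@langchain/openai";

const llmChain = new LLMChain({
  llm: new OpenAI(),
  prompt: PromptTemplate.fromTemplate("Tell me a {adjective} joke"),
  outputKey: "text", // the default
});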

Methods

  • apply(): Call the chain on all inputs in the list.

    Parameters

    • inputs: ChainValues[]
    • Optional config: (RunnableConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[])[]

    Returns Promise<ChainValues[]>

    Deprecated

    Use .batch() instead. Will be removed in 0.2.0.
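
    For new code, use the .batch() method on an LCEL runnable, such as the chain from the example at the top of this page:

    const responses = await chain.batch([{ adjective: "funny" }, { adjective: "sad" }]);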

  • call(): Run the core logic of this chain and add to the output if desired.

    Wraps _call and handles memory.

    Parameters

    • values: ChainValues & CallOptionsIfAvailable<Model>
    • Optional config: BaseCallbackConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]

    Returns Promise<ChainValues>
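
    A minimal sketch, assuming the llmChain constructed in the Properties section above:

    const result = await llmChain.call({ adjective: "funny" });
    console.log(result.text); // the completion, stored under outputKey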

  • invoke(): Invoke the chain with the provided input and return the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional options: RunnableConfig

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.
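
    For example, with the llmChain constructed above:

    const output = await llmChain.invoke({ adjective: "funny" });
    console.log(output.text);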

  • predict(): Format the prompt with the given values and pass it to the LLM.

    Parameters

    • values: ChainValues & CallOptionsIfAvailable<Model>

      Keys to pass to the prompt template

    • Optional callbackManager: CallbackManager

      CallbackManager to use

    Returns Promise<T>

    Completion from LLM.

    Example

    const text = await llmChain.predict({ adjective: "funny" });
    
  • prepOutputs(): Combine a run's inputs and outputs and, if memory is attached, save the new context to it.

    Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>
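
    A minimal sketch of calling it directly (call() normally invokes this for you), reusing llmChain from above:

    const combined = await llmChain.prepOutputs(
      { adjective: "funny" },
      { text: "Why did the chicken cross the road?" },
      false // false: merge the inputs into the returned record as well
    );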

  • run(): Call the chain on a single input and return its string output.

    Parameters

    • input: any
    • Optional config: RunnableConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]

    Returns Promise<string>

    Deprecated

    Use .invoke() instead. Will be removed in 0.2.0.
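
    For a chain with a single input key, the deprecated call and its replacement look like this:

    const oldStyle = await llmChain.run("funny"); // string
    const newStyle = await llmChain.invoke({ adjective: "funny" }); // ChainValues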
