Optional properties:
- criterion
- evaluation
- llm
- memory
- skip
Call the chain on all inputs in the list.
Deprecated: use .batch() instead; will be removed in 0.2.0. Not recommended for use.
Optional config: (RunnableConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[])[]
Run the core logic of this chain and add to output if desired.
Wraps _call and handles memory.
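The wrapping described above can be sketched as follows. This is a simplified illustration, not the actual Chain implementation; `MemoryLike`, `SimpleBufferMemory`, `coreLogic`, and `call` here are hypothetical stand-ins:

```typescript
// Hypothetical sketch: a call() wrapper that loads memory variables,
// runs the chain's core logic (the _call analogue), then saves the
// turn back to memory afterwards.
interface MemoryLike {
  loadMemoryVariables(): Record<string, string>;
  saveContext(
    input: Record<string, string>,
    output: Record<string, string>,
  ): void;
}

class SimpleBufferMemory implements MemoryLike {
  private history: string[] = [];
  loadMemoryVariables(): Record<string, string> {
    return { history: this.history.join("\n") };
  }
  saveContext(
    input: Record<string, string>,
    output: Record<string, string>,
  ): void {
    this.history.push(
      `in: ${JSON.stringify(input)} out: ${JSON.stringify(output)}`,
    );
  }
}

// Stand-in for _call: the core logic of the chain.
function coreLogic(values: Record<string, string>): Record<string, string> {
  return { text: `seen ${Object.keys(values).length} input keys` };
}

// Stand-in for call: merge memory into the inputs, run the core
// logic, persist the new context, and return the output.
function call(
  inputs: Record<string, string>,
  memory: MemoryLike,
): Record<string, string> {
  const values = { ...memory.loadMemoryVariables(), ...inputs };
  const output = coreLogic(values);
  memory.saveContext(inputs, output);
  return output;
}
```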
Check if the evaluation arguments are valid.
Optional config: BaseCallbackConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]
Optional reference: string
  The reference label.
Optional input: string
  The input string.
Throws if the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
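A minimal sketch of that validation logic. The function name and the boolean flags below are illustrative assumptions, not the chain's actual signature:

```typescript
// Illustrative check: throw if a required input string or reference
// label is missing, mirroring the validation described above.
function checkEvaluationArgs(
  requiresInput: boolean,
  requiresReference: boolean,
  input?: string,
  reference?: string,
): void {
  if (requiresInput && input == null) {
    throw new Error("This evaluator requires an input string.");
  }
  if (requiresReference && reference == null) {
    throw new Error("This evaluator requires a reference label.");
  }
}
```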
Evaluate the output string pairs.
Optional callOptions: BaseLanguageModelCallOptions
Optional config: BaseCallbackConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]
Returns a dictionary containing the preference, scores, and/or other information.
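As an illustration of that result shape, here is a toy parser that turns a pairwise verdict string into a preference record. The "[[A]]"/"[[B]]"/"[[C]]" verdict format, `parseVerdict`, and the score convention are assumptions for this sketch, not the chain's actual output parser:

```typescript
// Toy parser: map a verdict like "[[A]]" to a preference record with
// value (the winner), score (1 = A wins, 0 = B wins, 0.5 = tie),
// and the remaining text as reasoning.
interface PairwiseResult {
  value: "A" | "B" | "C";
  score: number;
  reasoning: string;
}

function parseVerdict(text: string): PairwiseResult {
  const match = text.match(/\[\[([ABC])\]\]/);
  if (!match) throw new Error(`No verdict found in: ${text}`);
  const value = match[1] as "A" | "B" | "C";
  const score = value === "A" ? 1 : value === "B" ? 0 : 0.5;
  return { value, score, reasoning: text.replace(match[0], "").trim() };
}
```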
Format prompt with values and pass to LLM.
Keys to pass to the prompt template.
Optional callbackManager: CallbackManager
  The CallbackManager to use.
Returns the completion from the LLM.
Example:
llm.predict({ adjective: "funny" })
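The "format prompt with values" step above can be sketched with a toy template filler. `fillTemplate` and the `{adjective}` placeholder syntax are illustrative, not the library's prompt template implementation:

```typescript
// Toy prompt formatting: replace {key} placeholders with provided
// values, mirroring the "format prompt with values" step before the
// formatted string is passed to the LLM.
function fillTemplate(
  template: string,
  values: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (_match: string, key: string) => {
    if (!(key in values)) throw new Error(`Missing value for "${key}"`);
    return values[key];
  });
}
```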
Static deserialize
Static fromLLM
  Create a new instance of the PairwiseStringEvalChain.
Optional criteria: "detail" | ConstitutionalPrinciple | {
  The criteria to use for evaluation.
Optional chainOptions: Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModelInterface<any, BaseLanguageModelCallOptions>>, "llm">>
  Options to pass to the chain.
Static resolve
  Optional criteria: "detail" | ConstitutionalPrinciple | {
Static resolve
A chain for comparing two outputs, such as the outputs of two models or prompts, or the outputs of a single model on similar inputs, with labeled preferences.
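To make the idea of a labeled pairwise preference concrete, here is a toy comparator that prefers whichever of two candidate outputs shares more tokens with a reference label. This is a naive stand-in for illustration only; the chain itself uses an LLM-graded comparison, not token overlap:

```typescript
// Naive pairwise preference: score each candidate by token overlap
// with the reference label and prefer the higher-scoring one.
// "C" marks a tie, echoing the A/B/tie preference shape.
function preferByOverlap(
  a: string,
  b: string,
  reference: string,
): "A" | "B" | "C" {
  const refTokens = new Set(reference.toLowerCase().split(/\s+/));
  const overlap = (s: string): number =>
    s.toLowerCase().split(/\s+/).filter((t) => refTokens.has(t)).length;
  const scoreA = overlap(a);
  const scoreB = overlap(b);
  return scoreA === scoreB ? "C" : scoreA > scoreB ? "A" : "B";
}
```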