llama.LLM.__call__
Runs the instantiated LLM engine.
Parameters
- input: <class 'llama.Type'>
  - the input to the LLM engine
- output_type: <class 'llama.Type'>
  - the type of the output
- input_type: <class 'llama.Type'> (Optional)
  - the type of the input (also inferred by the engine from input, so it is optional)
Returns
- output: <class 'llama.Type'>
  - the output of the LLM, based on input, in the type specified by output_type
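A minimal sketch of the call pattern described above. Because the real llama package may not be available, the Type-like classes and the LLM class below are self-contained stand-ins (all names here are illustrative, not the library's actual implementation); only the calling convention — input, output_type, and an optional input_type that the engine can otherwise infer — mirrors this reference.

```python
from dataclasses import dataclass


@dataclass
class Question:
    # stand-in for a llama.Type subclass describing the input schema
    text: str


@dataclass
class Answer:
    # stand-in for a llama.Type subclass describing the output schema
    text: str


class LLM:
    """Stand-in engine: echoes the question back in the output type."""

    def __call__(self, input, output_type, input_type=None):
        # input_type is optional: when omitted, infer it from `input` itself
        inferred = input_type or type(input)
        assert isinstance(input, inferred)
        # return a value of the requested output type, based on the input
        return output_type(text=f"echo: {input.text}")


llm = LLM()
out = llm(Question(text="What is 2+2?"), output_type=Answer)
print(type(out).__name__, out.text)
```

The return value is an instance of whatever class was passed as output_type, so callers can rely on its fields without parsing raw text.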