Callstack Reviewer supports specifying the model, token, and endpoint for all LLM calls. The model can be set either at the top level, which overrides the models in all modules, or on a per-module basis.
Example configuration:
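A minimal sketch, assuming a YAML-based config file; the key names (`model`, `token`, `endpoint`, `modules`) and the module names are illustrative assumptions rather than the tool's documented schema:

```yaml
# Hypothetical sketch; key and module names are illustrative assumptions,
# not Callstack Reviewer's actual schema.
token: ${LLM_API_TOKEN}              # token used for all LLM calls
endpoint: https://api.example.com/v1 # endpoint used for all LLM calls

# Option A: set the model once at the top level (overrides models in all modules)
model: gpt-4o

# Option B: set models per module instead
# modules:
#   code-review:
#     model: gpt-4o-mini
#   summary:
#     model: gpt-4o
```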
We highly recommend keeping the default model configuration for the best performance.