Callstack Reviewer supports specifying the model, token, and endpoint for all LLM calls.
The model can be specified either at the top level, which overrides the models in all modules, or on a per-module basis. Example configuration:
```yaml
pr_review:
  # specify model globally for all modules
  model:
    model: "gpt-4o"
    token: <model token>
    base_url: <endpoint base url>
  # specify model for a specific module
  modules:
    bug_hunter:
      model:
        model: "claude-3-5-sonnet-20241022"
        token: <model token>
        base_url: <endpoint base url>
```
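If you only need a single model for every module, a minimal sketch using the same schema (top-level `model` only, no `modules` overrides) could look like:

```yaml
pr_review:
  # one model for all modules; token and base_url are placeholders
  model:
    model: "gpt-4o"
    token: <model token>
    base_url: <endpoint base url>
```

Per-module entries are only needed when a module should use a different model than the global one.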
We highly recommend keeping the default model configuration for the best performance.