# Enable Local Models

To enable the use of local LLM models, set `BASE_URL` in your `.env` file:

```
BASE_URL=http://127.0.0.1:11434/v1
```
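
The port `11434` is the default for Ollama's OpenAI-compatible API. If that is what you are running, a quick sanity check before your first extraction is to list the models the local server exposes. The sketch below assumes a local Ollama instance on the default port; the `/v1/models` route is part of Ollama's OpenAI-compatible API.

```js
// Sanity check: confirm the local server is reachable and see which
// models it exposes. Assumes Ollama's OpenAI-compatible API on the
// default port (an assumption, not part of documind itself).
const response = await fetch('http://127.0.0.1:11434/v1/models');

if (!response.ok) {
  throw new Error(`Local model server not reachable: ${response.status}`);
}

const { data } = await response.json();
console.log(data.map((model) => model.id)); // e.g. ['llava:latest', ...]
```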

## Available Models

You can find the list of available models here.

By default, the Llava model is used. However, you can specify a different model in your code:

```js
import { extract } from 'documind';

const result = await extract({
  file: 'https://example.com/document.pdf',
  model: 'llama3.2-vision'
});

console.log(result);
```
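
Note that whichever model you specify must already be available on your local server. With Ollama, for example, you would typically download it first with `ollama pull llama3.2-vision` before calling `extract`.
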
Local models may not always provide optimal results. We are continually working to improve performance and add support for newer models.