Introduction

Let's get started with Ollama4j.

🦙 What is Ollama?

Ollama is an advanced AI tool that allows users to easily set up and run large language models locally (in CPU and GPU modes). With Ollama, users can leverage powerful language models such as Llama 2 and even customize and create their own models.

👨‍💻 Why Ollama4j?

Ollama4j was built for the simple purpose of integrating Ollama with Java applications.

Getting Started

What you'll need

- A JDK (version 11 or newer) installed on your machine
- An Ollama server, running locally or on a reachable host

Start Ollama server

The easiest way to get started with the Ollama server is with Docker. If you prefer to run the server directly instead, download the distribution for your platform and follow its installation instructions.
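If you install the distribution directly, you start the server from a terminal. As a minimal sketch (assuming the ollama CLI is on your PATH after installation):

Start the server on the default port 11434:
ollama serve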

With Docker

Run in CPU mode:
docker run -it -v ~/ollama:/root/.ollama -p 11434:11434 ollama/ollama
Run in GPU mode:
docker run -it --gpus=all -v ~/ollama:/root/.ollama -p 11434:11434 ollama/ollama
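If you don't want to keep an interactive session attached, Docker's standard -d flag runs the same container in the background, for example:

Run detached in CPU mode:
docker run -d -v ~/ollama:/root/.ollama -p 11434:11434 ollama/ollama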

You can run these commands in Command Prompt, PowerShell, Terminal, or the integrated terminal of your code editor.

Either way, the Ollama server will be available locally at http://localhost:11434/.
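To verify that the server is up, send a plain GET request to its root URL; a running Ollama server replies with the text "Ollama is running":

curl http://localhost:11434/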

Set up your project

Get started by creating a new Maven project in your favorite IDE.
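If you prefer the command line over an IDE, Maven's quickstart archetype can generate the project skeleton. The group and artifact IDs below are placeholders; substitute your own:

mvn archetype:generate -DgroupId=com.example -DartifactId=ollama4j-demo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false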

Add the dependency to your project's pom.xml.


<dependency>
    <groupId>io.github.ollama4j</groupId>
    <artifactId>ollama4j</artifactId>
    <version>1.0.78</version>
</dependency>

Find the latest version of the library on Maven Central.

You might want to include an SLF4J logger implementation in your pom.xml file. For example:

Use the slf4j-jdk14 implementation:


<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>2.0.9</version> <!-- Replace with the appropriate version -->
</dependency>

or use the logback-classic implementation:


<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.3.11</version> <!-- Replace with the appropriate version -->
</dependency>

or any other suitable implementation.

Create a new Java class in your project and add this code.

import io.github.ollama4j.OllamaAPI;

public class OllamaAPITest {

    public static void main(String[] args) {
        OllamaAPI ollamaAPI = new OllamaAPI();

        // ping() returns true if the Ollama server is reachable
        boolean isOllamaServerReachable = ollamaAPI.ping();

        System.out.println("Is Ollama server running: " + isOllamaServerReachable);
    }
}

This uses the default Ollama host, http://localhost:11434.
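If the server is reachable, this prints Is Ollama server running: true; otherwise, false.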

To connect to a different Ollama host, pass its URL to the constructor:

import io.github.ollama4j.OllamaAPI;

public class OllamaAPITest {

    public static void main(String[] args) {
        String host = "http://localhost:11434/";

        OllamaAPI ollamaAPI = new OllamaAPI(host);

        // Enable verbose request/response logging
        ollamaAPI.setVerbose(true);

        boolean isOllamaServerReachable = ollamaAPI.ping();

        System.out.println("Is Ollama server running: " + isOllamaServerReachable);
    }
}
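Once the client can reach the server, a typical next step is to pull a model for it to serve. The Ollama CLI handles this; llama2 below is only an example model name:

ollama pull llama2

If the server runs in the Docker container started earlier, run the same command inside the container (find the container name with docker ps):

docker exec -it <container-name> ollama pull llama2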