mlvoca.com API

What is mlvoca.com?

mlvoca.com offers a free LLM API: a publicly hosted /api/generate endpoint based on the Ollama API that enables text generation with several models.

Base URL

https://mlvoca.com

Endpoint for the free LLM API:

POST /api/generate

Generates a response based on a given prompt using a specified model. It supports both streaming and single-response generation.

Available Models:

  • TinyLlama (model name: "tinyllama")
  • DeepSeek R1 1.5B (model name: "deepseek-r1:1.5b")

Example Usage

Streaming Request:

curl -X POST https://mlvoca.com/api/generate -d '{
    "model": "tinyllama",
    "prompt": "Why is the sky blue?"
}'

Streaming Response:

{"model":"tinyllama","created_at":"2025-12-30T15:32:28.531852218Z","response":"The","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.568068974Z","response":" sky","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.605811799Z","response":" blue","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.657004904Z","response":" color","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.694936917Z","response":" is","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.746829635Z","response":" a","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.795048461Z","response":" result","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.846561791Z","response":" of","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.886800731Z","response":" natural","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.937753551Z","response":" light","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:28.977296783Z","response":" absor","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.026242174Z","response":"ption","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.063471035Z","response":" by","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.100445776Z","response":" the","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.146222519Z","response":" earth","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.183537162Z","response":"'","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.229797711Z","response":"s","done":false}
{"model":"tinyllama","created_at":"2025-12-30T15:32:29.271010173Z","response":" atmosphere","done":false}
...
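Each streamed line is a self-contained JSON object, so a client can parse the stream line by line and concatenate the "response" fields until a chunk reports "done": true. A minimal Python sketch (the function name is hypothetical, not part of the API):

```python
import json

def assemble_stream(lines):
    """Concatenate the 'response' fields of newline-delimited JSON
    chunks, stopping at the chunk whose 'done' field is true."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Using the first few chunks shown above:
chunks = [
    '{"model":"tinyllama","response":"The","done":false}',
    '{"model":"tinyllama","response":" sky","done":false}',
    '{"model":"tinyllama","response":" blue","done":false}',
]
print(assemble_stream(chunks))  # prints: The sky blue
```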

Non-Streaming Request:

curl -X POST https://mlvoca.com/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Non-Streaming Response:

{
  "model": "tinyllama",
  "created_at": "2025-05-09T19:34:00Z",
  "response": "The sky is blue because of Rayleigh scattering.",
  "done": true
  ...
}
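The same request can be issued from Python with the standard library alone. The helper below (a hypothetical name, not part of the API) only builds the POST request; actually sending it requires network access:

```python
import json
import urllib.request

def build_generate_request(prompt, model="tinyllama", **params):
    """Build a POST request for the /api/generate endpoint.
    Extra keyword arguments (e.g. stream=False) are passed
    through into the JSON body."""
    body = {"model": model, "prompt": prompt, **params}
    return urllib.request.Request(
        "https://mlvoca.com/api/generate",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("Why is the sky blue?", stream=False)
# To send it (requires network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```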

Accepts the following parameters:

  • model (required) - The model name used for generation (can be "tinyllama" or "deepseek-r1:1.5b").
  • prompt (required) - The input prompt for text generation.
  • suffix - Text appended after the model response.
  • format - Specifies the response format ("json" or a JSON schema).
  • options - Additional model parameters (e.g., "temperature").
  • system - System message override.
  • template - Custom prompt template.
  • stream - If false, returns a single response instead of a stream.
  • raw - If true, no templating is applied; the prompt is passed to the model as-is.
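As an illustration of the optional parameters, the request body below (the prompt and values are made up) combines system, options, and stream, and serializes to the JSON a client would POST to /api/generate:

```python
import json

# Hypothetical request body exercising the optional parameters.
# Only "model" and "prompt" are required; the rest tune generation.
body = {
    "model": "deepseek-r1:1.5b",
    "prompt": "Why is the sky blue? Answer in one sentence.",
    "system": "You are a concise physics tutor.",  # overrides the system message
    "options": {"temperature": 0.2},               # additional model parameters
    "stream": False,                               # return a single JSON response
}
payload = json.dumps(body)
```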

Notes on Usage

  • The endpoint currently requires no API key and enforces no rate limit, so it can be used for free without token or call limits.
  • Hardware resources are limited, so responses may be slow at times, especially during periods of high usage.
  • Scientific use of this API is encouraged: researchers, educators, and students at universities and other educational institutions are welcome to use this resource for research and teaching. Please reach out to mlvoca@protonmail.com if you are planning such a use.
  • Commercial use of this API is not allowed. If you are planning to use this API for your business, please contact mlvoca@protonmail.com.

Disclaimer

The API and related services provided herein are made available "as is" and "as available" without any warranties or guarantees, express or implied. The provider of this API (hereinafter referred to as "the Host") does not assume any liability for the accuracy, reliability, completeness, or usefulness of the outputs generated by large language models (LLMs) accessed through this API.

Users acknowledge and agree that:

  • The Host shall not be responsible for any actions, decisions, or consequences arising from the use of this API or any outputs generated by LLMs.
  • The Host disclaims any liability for direct, indirect, incidental, consequential, or special damages, including but not limited to loss of data, business disruption, or reputational harm, even if advised of the possibility of such damages.
  • Users are solely responsible for evaluating the appropriateness, legality, and applicability of the API’s outputs in their respective contexts.
  • The use of this API does not establish any form of client-provider, advisor, or fiduciary relationship.

By accessing and using this API, users agree to the terms set forth in this disclaimer and waive any claims against the Host related to its usage.