331 changes: 329 additions & 2 deletions output/openapi/elasticsearch-openapi.json

Large diffs are not rendered by default.

331 changes: 329 additions & 2 deletions output/openapi/elasticsearch-serverless-openapi.json

Large diffs are not rendered by default.

618 changes: 565 additions & 53 deletions output/schema/schema.json

Large diffs are not rendered by default.

43 changes: 43 additions & 0 deletions output/typescript/types.ts

Some generated files are not rendered by default.

44 changes: 22 additions & 22 deletions package-lock.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion package.json
@@ -3,7 +3,7 @@
"transform-to-openapi": "npm run transform-to-openapi --prefix compiler --"
},
"dependencies": {
"@redocly/cli": "^1.34.5"
"@redocly/cli": "^1.34.6"
Contributor: I don't think this should be getting changed here.

},
"version": "overlay"
}
1 change: 1 addition & 0 deletions specification/_doc_ids/table.csv
@@ -398,6 +398,7 @@ inference-api-put-huggingface,https://www.elastic.co/docs/api/doc/elasticsearch/
inference-api-put-jinaai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-jinaai,,
inference-api-put-llama,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-llama,,
inference-api-put-mistral,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-mistral,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-mistral.html,
inference-api-put-nvidia,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-nvidia,,
inference-api-put-openai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-openai,https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-openai.html,
inference-api-put-openshift-ai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-openshift-ai,,
inference-api-put-voyageai,https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-voyageai,,
49 changes: 49 additions & 0 deletions specification/_json_spec/inference.put_nvidia.json
@@ -0,0 +1,49 @@
{
"inference.put_nvidia": {
"documentation": {
"url": "https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-nvidia",
"description": "Create an Nvidia inference endpoint"
},
"stability": "stable",
"visibility": "public",
"headers": {
"accept": ["application/json"],
"content_type": ["application/json"]
},
"url": {
"paths": [
{
"path": "/_inference/{task_type}/{nvidia_inference_id}",
"methods": ["PUT"],
"parts": {
"task_type": {
"type": "enum",
"description": "The task type",
"options": [
"rerank",
"text_embedding",
"completion",
"chat_completion"
Contributor, on lines +23 to +26: Nitpick, but could these be in alphabetical order?

]
},
"nvidia_inference_id": {
"type": "string",
"description": "The inference ID"
}
}
}
]
},
"body": {
"description": "The inference endpoint's task and service settings",
"required": true
},
"params": {
"timeout": {
"type": "time",
"description": "Specifies the amount of time to wait for the inference endpoint to be created.",
"default": "30s"
}
}
}
}
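Based on the spec above, a request to create an endpoint might look like the following sketch. This is an illustrative example, not part of the diff: the inference ID and API key are placeholders, and the model is one of the tested models listed for `text_embedding` in `NvidiaServiceSettings`.

```json
PUT /_inference/text_embedding/my-nvidia-embeddings
{
  "service": "nvidia",
  "service_settings": {
    "api_key": "<nvidia-api-key>",
    "model_id": "nvidia/llama-3.2-nv-embedqa-1b-v2"
  }
}
```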
80 changes: 80 additions & 0 deletions specification/inference/_types/CommonTypes.ts
@@ -1809,6 +1809,86 @@ export enum MistralServiceType {
mistral
}

export class NvidiaServiceSettings {
/**
* A valid API key for your Nvidia endpoint.
* Can be found in `API Keys` section of Nvidia account settings.
*/
api_key: string
/**
* The URL of the Nvidia model endpoint.
*/
Contributor, on lines +1818 to +1820: Would it be helpful to include the default URLs for each task type if url isn't specified?

url?: string
/**
* The name of the model to use for the inference task.
* Refer to the model's documentation for the name if needed.
* Service has been tested and confirmed to be working with the following models:
*
* * For `text_embedding` task - `nvidia/llama-3.2-nv-embedqa-1b-v2`.
* * For `completion` and `chat_completion` tasks - `microsoft/phi-3-mini-128k-instruct`.
* * For `rerank` task - `nv-rerank-qa-mistral-4b:1`.
* Service doesn't support `text_embedding` task `baai/bge-m3` and `nvidia/nvclip` models due to them not recognizing the `input_type` parameter.
*/
model_id: string
/**
* For a `text_embedding` task, the maximum number of tokens per input before chunking occurs.
Contributor: This should be "For a `text_embedding` task, the maximum number of tokens per input. Inputs exceeding this value are truncated prior to sending to the Nvidia API." This is wrong almost everywhere in the docs; there's an issue describing some of the problems with max_input_tokens.

*/
max_input_tokens?: integer
/**
* For a `text_embedding` task, the similarity measure. One of cosine, dot_product, l2_norm.
*/
similarity?: NvidiaSimilarityType
/**
* This setting helps to minimize the number of rate limit errors returned from the Nvidia API.
* By default, the `nvidia` service sets the number of requests allowed per minute to 3000.
*/
rate_limit?: RateLimitSetting
}

export enum NvidiaTaskType {
text_embedding,
completion,
chat_completion,
rerank
Contributor, on lines +1849 to +1852: For consistency, could these be in alphabetical order?

}

export enum NvidiaServiceType {
nvidia
}

export enum NvidiaSimilarityType {
cosine,
dot_product,
l2_norm
}

export class NvidiaTaskSettings {
/**
* For a `text_embedding` task, type of input sent to the Nvidia endpoint.
* Valid values are:
*
* * `ingest`: Mapped to Nvidia's `passage` value in request. Used when generating embeddings during indexing.
* * `search`: Mapped to Nvidia's `query` value in request. Used when generating embeddings during querying.
*
* IMPORTANT: If not specified `input_type` field in request to Nvidia endpoint is set as `query` by default.
*/
input_type?: NvidiaInputType
/**
* For a `text_embedding` task, the method to handle inputs longer than the maximum token length.
Contributor: To help differentiate this from max_input_tokens it might be better to word it like "the method used by the Nvidia model to handle inputs longer than..."

* Valid values are:
*
* * `END`: When the input exceeds the maximum input token length, the end of the input is discarded.
* * `NONE`: When the input exceeds the maximum input token length, an error is returned.
* * `START`: When the input exceeds the maximum input token length, the start of the input is discarded.
*/
truncate?: CohereTruncateType
}

export enum NvidiaInputType {
ingest,
search
}
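Putting the two task-setting fields above together, a hypothetical `task_settings` payload for a `text_embedding` endpoint could look like this (values drawn from `NvidiaInputType` and the `truncate` options documented in this diff; the fragment itself is not part of the change):

```json
{
  "task_settings": {
    "input_type": "ingest",
    "truncate": "END"
  }
}
```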

export class OpenAIServiceSettings {
/**
* A valid API key of your OpenAI account.
12 changes: 12 additions & 0 deletions specification/inference/_types/Services.ts
@@ -41,6 +41,7 @@ import {
TaskTypeJinaAi,
TaskTypeLlama,
TaskTypeMistral,
TaskTypeNvidia,
TaskTypeOpenAI,
TaskTypeOpenShiftAi,
TaskTypeVoyageAI,
@@ -304,6 +305,17 @@ export class InferenceEndpointInfoMistral extends InferenceEndpoint {
task_type: TaskTypeMistral
}

export class InferenceEndpointInfoNvidia extends InferenceEndpoint {
/**
* The inference ID
*/
inference_id: string
/**
* The task type
*/
task_type: TaskTypeNvidia
}

export class InferenceEndpointInfoOpenAI extends InferenceEndpoint {
/**
* The inference Id
7 changes: 7 additions & 0 deletions specification/inference/_types/TaskType.ts
@@ -140,6 +140,13 @@ export enum TaskTypeMistral {
completion
}

export enum TaskTypeNvidia {
text_embedding,
chat_completion,
completion,
rerank
Contributor, on lines +144 to +147: For consistency, could these be in alphabetical order?

}

export enum TaskTypeOpenAI {
text_embedding,
chat_completion,
1 change: 1 addition & 0 deletions specification/inference/put/PutRequest.ts
@@ -49,6 +49,7 @@ import { TaskType } from '@inference/_types/TaskType'
* * JinaAI (`rerank`, `text_embedding`)
* * Llama (`chat_completion`, `completion`, `text_embedding`)
* * Mistral (`chat_completion`, `completion`, `text_embedding`)
* * Nvidia (`chat_completion`, `completion`, `text_embedding`, `rerank`)
* * OpenAI (`chat_completion`, `completion`, `text_embedding`)
* * OpenShift AI (`chat_completion`, `completion`, `rerank`, `text_embedding`)
* * VoyageAI (`rerank`, `text_embedding`)