The parameters for computing tokens.
The response from the API.
Counts the number of tokens in the given contents. Multimodal input is supported for Gemini models.
The parameters for counting tokens.
The response from the API.
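Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client and that the parameters accept a model name and contents):
const response = await ai.models.countTokens({
  model: 'gemini-2.0-flash',
  contents: 'The quick brown fox jumps over the lazy dog.',
});
// totalTokens is assumed to hold the token count for the given contents.
console.log(response.totalTokens);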
Deletes a tuned model by its name.
The parameters for deleting the model.
The response from the API.
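Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client; the tuned model name is hypothetical):
// Deletes the named tuned model; resolves once the deletion request completes.
await ai.models.delete({model: 'tunedModels/my-tuned-model-id'});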
Edits an image based on a prompt, list of reference images, and configuration.
The parameters for editing an image.
The response from the API.
const response = await client.models.editImage({
model: 'imagen-3.0-capability-001',
prompt: 'Generate an image containing a mug with the product logo [1] visible on the side of the mug.',
referenceImages: [subjectReferenceImage],
config: {
numberOfImages: 1,
includeRaiReason: true,
},
});
console.log(response?.generatedImages?.[0]?.image?.imageBytes);
Calculates embeddings for the given contents. Only text is supported.
The parameters for embedding contents.
The response from the API.
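Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client and an embedding model such as 'text-embedding-004'):
const response = await ai.models.embedContent({
  model: 'text-embedding-004',
  contents: ['What is your name?', 'What is your favorite color?'],
});
// embeddings is assumed to hold one embedding per input content.
console.log(response.embeddings);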
Makes an API request to generate content with a given model.
For the model parameter, supported formats for the Vertex AI API include the /-separated publisher and model name, for example: 'google/gemini-2.0-flash' or 'meta/llama-3.1-405b-instruct-maas'.
For the model parameter, supported formats for the Gemini API include the model name itself, for example: 'gemini-2.0-flash'.
Some models support multimodal input and output.
The parameters for generating content.
The response from generating content.
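Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client):
const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Why is the sky blue?',
  config: {maxOutputTokens: 256},
});
console.log(response.text);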
Makes an API request to generate content with a given model and yields the response in chunks.
For the model parameter, supported formats for the Vertex AI API include the /-separated publisher and model name, for example: 'google/gemini-2.0-flash' or 'meta/llama-3.1-405b-instruct-maas'.
For the model parameter, supported formats for the Gemini API include the model name itself, for example: 'gemini-2.0-flash'.
Some models support multimodal input and output.
The parameters for generating content with streaming response.
The response from generating content.
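Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client; the result is consumed as an async iterable of response chunks):
const response = await ai.models.generateContentStream({
  model: 'gemini-2.0-flash',
  contents: 'Write a short poem about the ocean.',
});
for await (const chunk of response) {
  // Each chunk carries an incremental piece of the generated text.
  console.log(chunk.text);
}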
Generates an image based on a text description and configuration.
The parameters for generating images.
The response from the API.
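Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client; the model name is illustrative):
const response = await ai.models.generateImages({
  model: 'imagen-3.0-generate-002',
  prompt: 'Robot holding a red skateboard',
  config: {
    numberOfImages: 1,
    includeRaiReason: true,
  },
});
console.log(response?.generatedImages?.[0]?.image?.imageBytes);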
Generates videos based on a text description and configuration.
The parameters for generating videos.
A Promise that resolves to a GenerateVideosOperation; poll the operation (for example, via ai.operations.getVideosOperation) until it is done to retrieve the generated videos.
let operation = await ai.models.generateVideos({
model: 'veo-2.0-generate-001',
prompt: 'A neon hologram of a cat driving at top speed',
config: {
numberOfVideos: 1,
},
});
while (!operation.done) {
await new Promise(resolve => setTimeout(resolve, 10000));
operation = await ai.operations.getVideosOperation({operation: operation});
}
console.log(operation.response?.generatedVideos?.[0]?.video?.uri);
Fetches information about a model by name.
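Example (a minimal sketch, assuming ai is an initialized GoogleGenAI client):
const modelInfo = await ai.models.get({model: 'gemini-2.0-flash'});
console.log(modelInfo.name);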
Lists available models.
Optional params: ListModelsParameters
Updates a tuned model by its name.
The parameters for updating the model.
The response from the API.
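Example (a hedged sketch; the tuned model name is hypothetical, and the config fields shown, displayName and description, are assumptions):
const updatedModel = await ai.models.update({
  model: 'tunedModels/my-tuned-model-id', // hypothetical tuned model name
  config: {
    displayName: 'My tuned model',
    description: 'An updated description.',
  },
});
console.log(updatedModel.displayName);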
Upscales an image based on an image, upscale factor, and configuration. Only supported in Vertex AI currently.
The parameters for upscaling an image.
The response from the API.
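Example (a hedged sketch for Vertex AI; the model name is illustrative, generatedImage stands for an image returned by a previous generateImages call, and the upscaleFactor value is an assumption):
const response = await ai.models.upscaleImage({
  model: 'imagen-3.0-generate-002',
  image: generatedImage, // an image from a previous generateImages call (assumed)
  upscaleFactor: 'x2',
});
console.log(response?.generatedImages?.[0]?.image?.imageBytes);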
Given a list of contents, returns a corresponding TokensInfo containing the list of tokens and list of token ids.
This method is not supported by the Gemini Developer API.
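Example (a minimal sketch for Vertex AI, assuming ai is an initialized GoogleGenAI client configured for Vertex AI; the response field name is an assumption):
const response = await ai.models.computeTokens({
  model: 'gemini-2.0-flash',
  contents: 'What is your name?',
});
// tokensInfo is assumed to contain the tokens and token ids for each content.
console.log(response.tokensInfo);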