Hierarchy

  • BaseModule
    • Models

Constructors

  • Parameters

    • apiClient: ApiClient

    Returns Models

Methods

  • Given a list of contents, returns a corresponding TokensInfo containing the list of tokens and the list of token IDs.

    This method is not supported by the Gemini Developer API.

    Parameters

    Returns Promise<ComputeTokensResponse>

    The response from the API.

    const response = await ai.models.computeTokens({
      model: 'gemini-2.0-flash',
      contents: 'What is your name?',
    });
    console.log(response);
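Once the response arrives, the TokensInfo entries can be unpacked. A minimal sketch, assuming the response exposes a `tokensInfo` array whose entries carry `tokens` and `tokenIds` lists (these field names are assumptions; check the ComputeTokensResponse type in your SDK version):

```typescript
// Hypothetical shape of a ComputeTokensResponse, per the description above.
interface TokensInfoLike {
  tokens?: string[];
  tokenIds?: string[];
}
interface ComputeTokensResponseLike {
  tokensInfo?: TokensInfoLike[];
}

// Pair each token with its ID for inspection.
function pairTokens(response: ComputeTokensResponseLike): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (const info of response.tokensInfo ?? []) {
    const tokens = info.tokens ?? [];
    const ids = info.tokenIds ?? [];
    for (let i = 0; i < Math.min(tokens.length, ids.length); i++) {
      pairs.push([tokens[i], ids[i]]);
    }
  }
  return pairs;
}
```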
  • Counts the number of tokens in the given contents. Multimodal input is supported for Gemini models.

    Parameters

    Returns Promise<CountTokensResponse>

    The response from the API.

    const response = await ai.models.countTokens({
      model: 'gemini-2.0-flash',
      contents: 'The quick brown fox jumps over the lazy dog.',
    });
    console.log(response);
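Since multimodal input is supported, `contents` can also mix text and inline image data. A hedged sketch of how such a payload might be assembled (the Part field names `text`, `inlineData`, `mimeType`, and `data` follow the common Gemini request shape but should be verified against your SDK version):

```typescript
// A Part holds either text or inline binary data (assumed field names).
interface Part {
  text?: string;
  inlineData?: { mimeType: string; data: string };
}

// Build a single-turn multimodal `contents` payload from a prompt and a
// base64-encoded PNG.
function buildMultimodalContents(
  text: string,
  imageBase64: string,
): Array<{ role: string; parts: Part[] }> {
  return [
    {
      role: 'user',
      parts: [
        { text },
        { inlineData: { mimeType: 'image/png', data: imageBase64 } },
      ],
    },
  ];
}

// The payload would then be passed as, e.g.:
// await ai.models.countTokens({ model: 'gemini-2.0-flash', contents });
```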
  • Edits an image based on a prompt, list of reference images, and configuration.

    Parameters

    Returns Promise<EditImageResponse>

    The response from the API.

    const response = await client.models.editImage({
      model: 'imagen-3.0-capability-001',
      prompt: 'Generate an image containing a mug with the product logo [1] visible on the side of the mug.',
      referenceImages: [subjectReferenceImage],
      config: {
        numberOfImages: 1,
        includeRaiReason: true,
      },
    });
    console.log(response?.generatedImages?.[0]?.image?.imageBytes);
  • Calculates embeddings for the given contents. Only text is supported.

    Parameters

    Returns Promise<EmbedContentResponse>

    The response from the API.

    const response = await ai.models.embedContent({
      model: 'text-embedding-004',
      contents: [
        'What is your name?',
        'What is your favorite color?',
      ],
      config: {
        outputDimensionality: 64,
      },
    });
    console.log(response);
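Embedding vectors are commonly compared with cosine similarity. A self-contained sketch, not part of the SDK:

```typescript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|). Returns a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```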
  • Makes an API request to generate content with a given model.

    For the model parameter, supported formats for Vertex AI API include:

    • The Gemini model ID, for example: 'gemini-2.0-flash'
    • The full resource name, starting with 'projects/', for example: 'projects/my-project-id/locations/us-central1/publishers/google/models/gemini-2.0-flash'
    • The partial resource name, starting with 'publishers/', for example: 'publishers/google/models/gemini-2.0-flash' or 'publishers/meta/models/llama-3.1-405b-instruct-maas'
    • A '/'-separated publisher and model name, for example: 'google/gemini-2.0-flash' or 'meta/llama-3.1-405b-instruct-maas'

    For the model parameter, supported formats for Gemini API include:

    • The Gemini model ID, for example: 'gemini-2.0-flash'
    • The model name, starting with 'models/', for example: 'models/gemini-2.0-flash'
    • For tuned models, the model name, starting with 'tunedModels/', for example: 'tunedModels/1234567890123456789'

    Some models support multimodal input and output.

    Parameters

    Returns Promise<GenerateContentResponse>

    The response from generating content.

    const response = await ai.models.generateContent({
      model: 'gemini-2.0-flash',
      contents: 'why is the sky blue?',
      config: {
        candidateCount: 2,
      },
    });
    console.log(response);
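The model-name formats listed above can be summarized as a small classifier. The format labels below are illustrative only, not part of the SDK:

```typescript
// Illustrative labels for the accepted model-name formats.
type ModelNameFormat =
  | 'full-resource'     // 'projects/.../models/...' (Vertex AI)
  | 'partial-resource'  // 'publishers/google/models/...' (Vertex AI)
  | 'publisher-model'   // 'google/gemini-2.0-flash' (Vertex AI)
  | 'models-prefixed'   // 'models/gemini-2.0-flash' (Gemini API)
  | 'tuned-model'       // 'tunedModels/123...' (Gemini API)
  | 'model-id';         // 'gemini-2.0-flash' (both)

function classifyModelName(name: string): ModelNameFormat {
  if (name.startsWith('projects/')) return 'full-resource';
  if (name.startsWith('publishers/')) return 'partial-resource';
  if (name.startsWith('models/')) return 'models-prefixed';
  if (name.startsWith('tunedModels/')) return 'tuned-model';
  if (name.includes('/')) return 'publisher-model';
  return 'model-id';
}
```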
  • Makes an API request to generate content with a given model and yields the response in chunks.

    For the model parameter, supported formats for Vertex AI API include:

    • The Gemini model ID, for example: 'gemini-2.0-flash'
    • The full resource name, starting with 'projects/', for example: 'projects/my-project-id/locations/us-central1/publishers/google/models/gemini-2.0-flash'
    • The partial resource name, starting with 'publishers/', for example: 'publishers/google/models/gemini-2.0-flash' or 'publishers/meta/models/llama-3.1-405b-instruct-maas'
    • A '/'-separated publisher and model name, for example: 'google/gemini-2.0-flash' or 'meta/llama-3.1-405b-instruct-maas'

    For the model parameter, supported formats for Gemini API include:

    • The Gemini model ID, for example: 'gemini-2.0-flash'
    • The model name, starting with 'models/', for example: 'models/gemini-2.0-flash'
    • For tuned models, the model name, starting with 'tunedModels/', for example: 'tunedModels/1234567890123456789'

    Some models support multimodal input and output.

    Parameters

    Returns Promise<AsyncGenerator<GenerateContentResponse, any, unknown>>

    The response from generating content.

    const response = await ai.models.generateContentStream({
      model: 'gemini-2.0-flash',
      contents: 'why is the sky blue?',
      config: {
        maxOutputTokens: 200,
      },
    });
    for await (const chunk of response) {
      console.log(chunk);
    }
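A common pattern is to accumulate the streamed chunks into one string rather than logging each. The chunk shape here (a `text` field) is an assumption; check the GenerateContentResponse type in your SDK version. The mock generator stands in for the real API stream so the sketch is self-contained:

```typescript
// Accumulate the text fragments of an async stream of chunks.
async function collectText(
  stream: AsyncIterable<{ text?: string }>,
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk.text ?? '';
  }
  return full;
}

// Mock stream standing in for the real generateContentStream response.
async function* mockStream() {
  yield { text: 'The sky is blue ' };
  yield { text: 'because of Rayleigh scattering.' };
}
```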
  • Generates an image based on a text description and configuration.

    Parameters

    Returns Promise<GenerateImagesResponse>

    The response from the API.

    const response = await client.models.generateImages({
      model: 'imagen-3.0-generate-002',
      prompt: 'Robot holding a red skateboard',
      config: {
        numberOfImages: 1,
        includeRaiReason: true,
      },
    });
    console.log(response?.generatedImages?.[0]?.image?.imageBytes);
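To save the result, the `imageBytes` value must be decoded first. It is assumed here to be a base64-encoded string (verify against your SDK version); a minimal decoding sketch:

```typescript
import { Buffer } from 'node:buffer';

// Decode a base64-encoded imageBytes string into raw bytes.
function decodeImageBytes(imageBytes: string): Buffer {
  return Buffer.from(imageBytes, 'base64');
}

// e.g. fs.writeFileSync('robot.png', decodeImageBytes(imageBytes));
```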
  • Generates videos based on a text description and configuration.

    Parameters

    Returns Promise<GenerateVideosOperation>

    A Promise that resolves to a GenerateVideosOperation, which you can poll for progress and eventually use to retrieve the generated videos via the operations.get method.

    let operation = await ai.models.generateVideos({
      model: 'veo-2.0-generate-001',
      prompt: 'A neon hologram of a cat driving at top speed',
      config: {
        numberOfVideos: 1,
      },
    });

    while (!operation.done) {
      await new Promise(resolve => setTimeout(resolve, 10000));
      operation = await ai.operations.getVideosOperation({operation: operation});
    }

    console.log(operation.response?.generatedVideos?.[0]?.video?.uri);
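The while-loop above can be generalized into a small polling helper. The operation shape (a `done` flag) mirrors the example; the refresh function is injected so the helper can be exercised without network access:

```typescript
// Poll a long-running operation until it reports done, refreshing it via the
// injected `refresh` function (e.g. ai.operations.getVideosOperation).
async function pollUntilDone<T extends { done?: boolean }>(
  operation: T,
  refresh: (op: T) => Promise<T>,
  intervalMs = 10_000,
): Promise<T> {
  let current = operation;
  while (!current.done) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    current = await refresh(current);
  }
  return current;
}
```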
  • Updates a tuned model by its name.

    Parameters

    Returns Promise<Model>

    The response from the API.

    const response = await ai.models.update({
      model: 'tuned-model-name',
      config: {
        displayName: 'New display name',
        description: 'New description',
      },
    });
  • Upscales an image given a source image, an upscale factor, and configuration. Currently supported only in Vertex AI.

    Parameters

    Returns Promise<UpscaleImageResponse>

    The response from the API.

    const response = await client.models.upscaleImage({
      model: 'imagen-3.0-generate-002',
      image: image,
      upscaleFactor: 'x2',
      config: {
        includeRaiReason: true,
      },
    });
    console.log(response?.generatedImages?.[0]?.image?.imageBytes);