Firebase.AI.GenerativeModel
A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on various input types.
Summary
| Public functions | |
|---|---|
| `CountTokensAsync(ModelContent content, CancellationToken cancellationToken)` | `Task<CountTokensResponse>`<br>Counts the number of tokens in a prompt using the model's tokenizer. |
| `CountTokensAsync(string text, CancellationToken cancellationToken)` | `Task<CountTokensResponse>`<br>Counts the number of tokens in a prompt using the model's tokenizer. |
| `CountTokensAsync(IEnumerable<ModelContent> content, CancellationToken cancellationToken)` | `Task<CountTokensResponse>`<br>Counts the number of tokens in a prompt using the model's tokenizer. |
| `GenerateContentAsync(ModelContent content, CancellationToken cancellationToken)` | `Task<GenerateContentResponse>`<br>Generates new content from input `ModelContent` given to the model as a prompt. |
| `GenerateContentAsync(string text, CancellationToken cancellationToken)` | `Task<GenerateContentResponse>`<br>Generates new content from input text given to the model as a prompt. |
| `GenerateContentAsync(IEnumerable<ModelContent> content, CancellationToken cancellationToken)` | `Task<GenerateContentResponse>`<br>Generates new content from input `ModelContent` given to the model as a prompt. |
| `GenerateContentStreamAsync(ModelContent content, CancellationToken cancellationToken)` | `IAsyncEnumerable<GenerateContentResponse>`<br>Generates new content as a stream from input `ModelContent` given to the model as a prompt. |
| `GenerateContentStreamAsync(string text, CancellationToken cancellationToken)` | `IAsyncEnumerable<GenerateContentResponse>`<br>Generates new content as a stream from input text given to the model as a prompt. |
| `GenerateContentStreamAsync(IEnumerable<ModelContent> content, CancellationToken cancellationToken)` | `IAsyncEnumerable<GenerateContentResponse>`<br>Generates new content as a stream from input `ModelContent` given to the model as a prompt. |
| `StartChat(params ModelContent[] history)` | `Chat`<br>Creates a new chat conversation using this model with the provided history. |
| `StartChat(IEnumerable<ModelContent> history)` | `Chat`<br>Creates a new chat conversation using this model with the provided history. |
Public functions
CountTokensAsync
`Task<CountTokensResponse> CountTokensAsync(ModelContent content, CancellationToken cancellationToken)`

Counts the number of tokens in a prompt using the model's tokenizer.

| Details | |
|---|---|
| Parameters | `content`: the input `ModelContent` given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | The `CountTokensResponse` of running the model's tokenizer on the input. |
CountTokensAsync
`Task<CountTokensResponse> CountTokensAsync(string text, CancellationToken cancellationToken)`

Counts the number of tokens in a prompt using the model's tokenizer.

| Details | |
|---|---|
| Parameters | `text`: the text input given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | The `CountTokensResponse` of running the model's tokenizer on the input. |
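A minimal usage sketch for the string overload. It assumes Firebase has already been initialized in the app, that a model handle is obtained via `FirebaseAI.DefaultInstance.GetGenerativeModel(...)`, and that `CountTokensResponse` exposes a `TotalTokens` property; the model name is an example, so verify these details against your SDK version:

```csharp
using System.Threading;
using Firebase.AI;
using UnityEngine;

public class TokenCountExample : MonoBehaviour
{
    async void Start()
    {
        // Assumed entry point for obtaining a model handle.
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(modelName: "gemini-2.0-flash");

        // Count tokens before sending the prompt, e.g. to stay within a context limit.
        CountTokensResponse response =
            await model.CountTokensAsync("Why is the sky blue?", CancellationToken.None);

        Debug.Log($"Total tokens: {response.TotalTokens}");
    }
}
```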
CountTokensAsync
`Task<CountTokensResponse> CountTokensAsync(IEnumerable<ModelContent> content, CancellationToken cancellationToken)`

Counts the number of tokens in a prompt using the model's tokenizer.

| Details | |
|---|---|
| Parameters | `content`: the sequence of `ModelContent` given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | The `CountTokensResponse` of running the model's tokenizer on the input. |
GenerateContentAsync
`Task<GenerateContentResponse> GenerateContentAsync(ModelContent content, CancellationToken cancellationToken)`

Generates new content from input `ModelContent` given to the model as a prompt.

| Details | |
|---|---|
| Parameters | `content`: the input `ModelContent` given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | The generated content response from the model. |
GenerateContentAsync
`Task<GenerateContentResponse> GenerateContentAsync(string text, CancellationToken cancellationToken)`

Generates new content from input text given to the model as a prompt.

| Details | |
|---|---|
| Parameters | `text`: the text input given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | The generated content response from the model. |
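A sketch of a one-shot generation call using the string overload. The model name and the `Text` convenience property on `GenerateContentResponse` are assumptions not defined on this page, so check them against your SDK version:

```csharp
using System.Threading;
using Firebase.AI;
using UnityEngine;

public class GenerateExample : MonoBehaviour
{
    async void Start()
    {
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(modelName: "gemini-2.0-flash");

        // Send a single text prompt and await the complete response.
        GenerateContentResponse response = await model.GenerateContentAsync(
            "Write a haiku about the Unity editor.", CancellationToken.None);

        Debug.Log(response.Text ?? "No text in response.");
    }
}
```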
GenerateContentAsync
`Task<GenerateContentResponse> GenerateContentAsync(IEnumerable<ModelContent> content, CancellationToken cancellationToken)`

Generates new content from input `ModelContent` given to the model as a prompt.

| Details | |
|---|---|
| Parameters | `content`: the sequence of `ModelContent` given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | The generated content response from the model. |
GenerateContentStreamAsync
`IAsyncEnumerable<GenerateContentResponse> GenerateContentStreamAsync(ModelContent content, CancellationToken cancellationToken)`

Generates new content as a stream from input `ModelContent` given to the model as a prompt.

| Details | |
|---|---|
| Parameters | `content`: the input `ModelContent` given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | A stream of generated content responses from the model. |
GenerateContentStreamAsync
`IAsyncEnumerable<GenerateContentResponse> GenerateContentStreamAsync(string text, CancellationToken cancellationToken)`

Generates new content as a stream from input text given to the model as a prompt.

| Details | |
|---|---|
| Parameters | `text`: the text input given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | A stream of generated content responses from the model. |
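Because the streaming overloads return `IAsyncEnumerable<GenerateContentResponse>`, the chunks are consumed with `await foreach` (requires C# 8 support in your Unity version). A sketch, with the same assumptions as above about obtaining the model handle, plus a `CancellationTokenSource` to illustrate how the token parameter can stop a long stream:

```csharp
using System.Threading;
using Firebase.AI;
using UnityEngine;

public class StreamingExample : MonoBehaviour
{
    async void Start()
    {
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(modelName: "gemini-2.0-flash");

        // Cancel this token (e.g. from a UI button) to stop the stream early.
        using var cts = new CancellationTokenSource();

        // Each iteration yields a partial response as the model produces it.
        await foreach (var chunk in model.GenerateContentStreamAsync(
                           "Tell me a long story about a brave hamster.", cts.Token))
        {
            Debug.Log(chunk.Text);
        }
    }
}
```

Streaming lets the UI display partial text as it arrives instead of waiting for the full response.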
GenerateContentStreamAsync
`IAsyncEnumerable<GenerateContentResponse> GenerateContentStreamAsync(IEnumerable<ModelContent> content, CancellationToken cancellationToken)`

Generates new content as a stream from input `ModelContent` given to the model as a prompt.

| Details | |
|---|---|
| Parameters | `content`: the sequence of `ModelContent` given to the model as a prompt.<br>`cancellationToken`: a token that can be used to cancel the operation. |
| Exceptions | |
| Returns | A stream of generated content responses from the model. |
StartChat
`Chat StartChat(params ModelContent[] history)`

Creates a new chat conversation using this model with the provided history.

| Details | |
|---|---|
| Parameters | `history`: the initial conversation content the new chat starts from. |
| Returns | A new `Chat` instance for a conversation that includes the provided history. |
StartChat
`Chat StartChat(IEnumerable<ModelContent> history)`

Creates a new chat conversation using this model with the provided history.

| Details | |
|---|---|
| Parameters | `history`: the initial conversation content the new chat starts from. |
| Returns | A new `Chat` instance for a conversation that includes the provided history. |
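A chat sketch using the `params` overload to seed the conversation. The `ModelContent.Text(...)` helper for building history entries and the `Chat.SendMessageAsync(...)` method are assumptions about the surrounding SDK (neither is defined on this page), so treat this as illustrative only:

```csharp
using Firebase.AI;
using UnityEngine;

public class ChatExample : MonoBehaviour
{
    async void Start()
    {
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(modelName: "gemini-2.0-flash");

        // Seed the conversation with prior turns via the params overload.
        var chat = model.StartChat(
            ModelContent.Text("Hello, I have two dogs at home."),
            ModelContent.Text("Great to meet you. What would you like to know?"));

        // Later messages are answered in the context of the accumulated history.
        var reply = await chat.SendMessageAsync("How many paws are in my house?");
        Debug.Log(reply.Text);
    }
}
```

Using `StartChat` rather than repeated `GenerateContentAsync` calls keeps the multi-turn history management inside the `Chat` object.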