base_completion
BaseCompletion Objects
class BaseCompletion()
Base class for handling completions. This class provides shared logic for creating completions,
both synchronously and asynchronously, and both streaming and non-streaming.
Attributes:
endpoint (str) - API endpoint for the completion request.
response_class (Type) - Class used for parsing the non-streaming response.
stream_response_class (Type) - Class used for parsing the streaming response.
create
@classmethod
def create(cls, model, prompt_or_messages=None, request_timeout=600, stream=False, **kwargs)
Create a completion or chat completion.
Arguments:
model (str) - Model name to use for the completion.
prompt_or_messages (Union[str, List[ChatMessage]]) - The prompt for Completion or a list of chat messages for ChatCompletion. If not specified, must specify either prompt or messages in kwargs.
request_timeout (int, optional) - Request timeout in seconds. Defaults to 600.
stream (bool, optional) - Whether to use streaming or not. Defaults to False.
**kwargs - Additional keyword arguments.
Returns:
Union[CompletionResponse, Generator[CompletionStreamResponse, None, None]]:
Depending on the stream argument, either returns a CompletionResponse
or a generator yielding CompletionStreamResponse.
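A minimal usage sketch. The fireworks.client import path, the api_key module attribute, and the model name are assumptions for illustration, not part of this reference:
```python
import fireworks.client
from fireworks.client import Completion

# Assumption: the API key is configured via a module-level attribute.
fireworks.client.api_key = "YOUR_API_KEY"

# Non-streaming: returns a CompletionResponse.
response = Completion.create(
    model="accounts/fireworks/models/llama-v2-7b",  # placeholder model name
    prompt_or_messages="Say hello in one short sentence.",
    max_tokens=32,  # forwarded through **kwargs
)
print(response.choices[0].text)

# Streaming: returns a generator of CompletionStreamResponse chunks.
for chunk in Completion.create(
    model="accounts/fireworks/models/llama-v2-7b",
    prompt_or_messages="Say hello in one short sentence.",
    stream=True,
):
    print(chunk.choices[0].text, end="", flush=True)
```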
acreate
@classmethod
def acreate(cls, model, *args, request_timeout=600, stream=False, **kwargs)
Asynchronously create a completion.
Arguments:
model (str) - Model name to use for the completion.
request_timeout (int, optional) - Request timeout in seconds. Defaults to 600.
stream (bool, optional) - Whether to use streaming or not. Defaults to False.
**kwargs - Additional keyword arguments.
Returns:
Union[CompletionResponse, AsyncGenerator[CompletionStreamResponse, None]]:
Depending on the stream argument, either returns a CompletionResponse or an async generator yielding CompletionStreamResponse.
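An async sketch under the same assumptions as above. Awaiting the non-streaming call and iterating the streaming call directly is inferred from the signature and return types documented here and may need adjusting:
```python
import asyncio

from fireworks.client import Completion  # assumed import path


async def main():
    # Non-streaming: resolves to a CompletionResponse.
    response = await Completion.acreate(
        model="accounts/fireworks/models/llama-v2-7b",  # placeholder model name
        prompt="Write a haiku about the ocean.",  # prompt passed via **kwargs
        max_tokens=64,
    )
    print(response.choices[0].text)

    # Streaming: an async generator of CompletionStreamResponse chunks.
    async for chunk in Completion.acreate(
        model="accounts/fireworks/models/llama-v2-7b",
        prompt="Write a haiku about the ocean.",
        stream=True,
    ):
        print(chunk.choices[0].text, end="", flush=True)


asyncio.run(main())
```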
completion
Completion Objects
class Completion(BaseCompletion)
Class for handling text completions.
chat_completion
ChatCompletion Objects
class ChatCompletion(BaseCompletion)
Class for handling chat completions.
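A chat sketch. The import path and model name are assumptions, and plain dicts are assumed to coerce into ChatMessage; otherwise construct ChatMessage objects explicitly:
```python
from fireworks.client import ChatCompletion  # assumed import path

response = ChatCompletion.create(
    model="accounts/fireworks/models/llama-v2-7b-chat",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```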
api
Choice Objects
class Choice(BaseModel)
A completion choice.
Attributes:
index (int) - The index of the completion choice.
text (str) - The completion response.
logprobs (float, optional) - The log probabilities of the most likely tokens.
finish_reason (str) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
CompletionResponse Objects
class CompletionResponse(BaseModel)
The response message from a /v1/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "text_completion".
created (int) - The Unix time in seconds when the response was generated.
choices (List[Choice]) - The list of generated completion choices.
CompletionResponseStreamChoice Objects
class CompletionResponseStreamChoice(BaseModel)
A streamed completion choice.
Attributes:
index (int) - The index of the completion choice.
text (str) - The completion response.
logprobs (float, optional) - The log probabilities of the most likely tokens.
finish_reason (str) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
CompletionStreamResponse Objects
class CompletionStreamResponse(BaseModel)
The streamed response message from a /v1/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "text_completion".
created (int) - The Unix time in seconds when the response was generated.
model (str) - The model used for the completion.
choices (List[CompletionResponseStreamChoice]) - The list of streamed completion choices.
Model Objects
class Model(BaseModel)
A model deployed to the Fireworks platform.
Attributes:
id (str) - The model name.
object (str) - The object type, which is always "model".
created (int) - The Unix time in seconds when the model was generated.
ListModelsResponse Objects
class ListModelsResponse(BaseModel)
The response message from a /v1/models call.
Attributes:
object (str) - The object type, which is always "list".
data (List[Model]) - The list of models.
ChatMessage Objects
class ChatMessage(BaseModel)
A chat completion message.
Attributes:
role (str) - The role of the author of this message.
content (str) - The contents of the message.
ChatCompletionResponseChoice Objects
class ChatCompletionResponseChoice(BaseModel)
A chat completion choice generated by a chat model.
Attributes:
index (int) - The index of the chat completion choice.
message (ChatMessage) - The chat completion message.
finish_reason (Optional[str]) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
UsageInfo Objects
class UsageInfo(BaseModel)
Usage statistics.
Attributes:
prompt_tokens (int) - The number of tokens in the prompt.
total_tokens (int) - The total number of tokens used in the request (prompt + completion).
completion_tokens (Optional[int]) - The number of tokens in the generated completion.
ChatCompletionResponse Objects
class ChatCompletionResponse(BaseModel)
The response message from a /v1/chat/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "chat.completion".
created (int) - The Unix time in seconds when the response was generated.
model (str) - The model used for the chat completion.
choices (List[ChatCompletionResponseChoice]) - The list of chat completion choices.
usage (UsageInfo) - Usage statistics for the chat completion.
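A sketch of reading the documented fields off a response; `response` is assumed to come from a ChatCompletion.create call like the sketch above:
```python
choice = response.choices[0]
print(choice.message.role, "->", choice.message.content)
print("finish_reason:", choice.finish_reason)

usage = response.usage
print("prompt tokens:", usage.prompt_tokens)
print("completion tokens:", usage.completion_tokens)
print("total tokens:", usage.total_tokens)
```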
DeltaMessage Objects
class DeltaMessage(BaseModel)
A message delta.
Attributes:
role (str) - The role of the author of this message.
content (str) - The contents of the chunk message.
ChatCompletionResponseStreamChoice Objects
class ChatCompletionResponseStreamChoice(BaseModel)
A streamed chat completion choice.
Attributes:
index (int) - The index of the chat completion choice.
delta (DeltaMessage) - The message delta.
finish_reason (str) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
ChatCompletionStreamResponse Objects
class ChatCompletionStreamResponse(BaseModel)
The streamed response message from a /v1/chat/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "chat.completion".
created (int) - The Unix time in seconds when the response was generated.
model (str) - The model used for the chat completion.
choices (List[ChatCompletionResponseStreamChoice]) - The list of streamed chat completion choices.
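A sketch of consuming a streamed chat response, where each chunk carries a DeltaMessage. The import path and model name are assumptions as above:
```python
from fireworks.client import ChatCompletion  # assumed import path

for chunk in ChatCompletion.create(
    model="accounts/fireworks/models/llama-v2-7b-chat",  # placeholder model name
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
):
    delta = chunk.choices[0].delta
    if delta.content:  # skip role-only or empty deltas
        print(delta.content, end="", flush=True)
print()
```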
model
Model Objects
class Model()
list
@classmethod
def list(cls, request_timeout=60)
Returns a list of available models.
Arguments:
request_timeout (int, optional) - The request timeout in seconds. Default is 60.
Returns:
ListModelsResponse - A list of available models.
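A sketch of listing deployed models; the import path is an assumption:
```python
from fireworks.client import Model  # assumed import path

models = Model.list()
print(models.object)  # "list"
for m in models.data:
    print(m.id, m.created)
```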
log
set_console_log_level
def set_console_log_level(level: str) -> None
Controls console logging.
Arguments:
level (str) - The minimum level that is printed to the console. Supported values: CRITICAL, FATAL, ERROR, WARN, WARNING, INFO, DEBUG.
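A sketch of adjusting console logging; the module path in the import is an assumption:
```python
from fireworks.client.log import set_console_log_level  # assumed import path

set_console_log_level("DEBUG")  # verbose: everything from DEBUG upward
set_console_log_level("ERROR")  # quiet: only ERROR and above
```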
error
PermissionError Objects
class PermissionError(FireworksError)
A permission denied error.
InvalidRequestError Objects
class InvalidRequestError(FireworksError)
An invalid request error.
AuthenticationError Objects
class AuthenticationError(FireworksError)
An authentication error.
RateLimitError Objects
class RateLimitError(FireworksError)
A rate limit error.
InternalServerError Objects
class InternalServerError(FireworksError)
An internal server error.
ServiceUnavailableError Objects
class ServiceUnavailableError(FireworksError)
A service unavailable error.
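An illustrative error-handling sketch. The import paths, the placeholder model name, and the retry policy are assumptions, not part of this reference:
```python
import time

from fireworks.client import ChatCompletion  # assumed import path
from fireworks.client.error import (  # assumed import path
    AuthenticationError,
    InvalidRequestError,
    RateLimitError,
    ServiceUnavailableError,
)


def chat_with_retry(messages, retries=3):
    for attempt in range(retries):
        try:
            return ChatCompletion.create(
                model="accounts/fireworks/models/llama-v2-7b-chat",  # placeholder
                messages=messages,
            )
        except (RateLimitError, ServiceUnavailableError):
            time.sleep(2 ** attempt)  # transient: back off and retry
        except (AuthenticationError, InvalidRequestError):
            raise  # caller error: do not retry
    raise RuntimeError("exhausted retries")
```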
