Generative AI
Functions
Adds a message to a Conversation that the language model will begin replying to. You can receive the reply one piece at a time by calling conversation_get_reply_piece(conversation c) in a loop.
Parameters:
| Name | Type | Description |
|---|---|---|
| c | Conversation | The Conversation object to check |
| message | String | The user message to add to the conversation - the language model will reply to this |
Signatures:
| Language | Signature |
|---|---|
| C++ | `void conversation_add_message(conversation c, const string &message)` |
| C# (method) | `public void Conversation.AddMessage(string message);` |
| C# (static) | `public static void SplashKit.ConversationAddMessage(Conversation c, string message);` |
| Python | `def conversation_add_message(c, message):` |
| Pascal | `procedure ConversationAddMessage(c: Conversation; const message: String)` |

Returns a reply from a Conversation, without any related thoughts.
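The add-then-stream pattern that `conversation_add_message` enables can be sketched in Python. The real functions come from the SplashKit library; the `Conversation` class below is a minimal stand-in (an assumption, not the real implementation) with a canned reply, so the call pattern is runnable on its own:

```python
# Minimal stand-in for SplashKit's Conversation, used only to
# illustrate the add-message / stream-reply call pattern.
class Conversation:
    def __init__(self):
        self._pending = []

    def add_message(self, message):
        # The real library queues the message and starts the model
        # generating; here we just fake a canned streamed reply.
        self._pending = ["Hello", " ", "world"]

    def is_replying(self):
        return len(self._pending) > 0

    def get_reply_piece(self):
        return self._pending.pop(0) if self._pending else ""


conv = Conversation()
conv.add_message("Say hello")

reply = ""
while conv.is_replying():
    reply += conv.get_reply_piece()  # one small piece per call

print(reply)  # -> Hello world
```

With the real library the loop shape is identical; only the pieces come from the model instead of a canned list.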
Parameters:
| Name | Type | Description |
|---|---|---|
| conv | Conversation | The Conversation to receive the reply from |
Return Type: String
Returns: The response from the model
Signatures:
| Language | Signature |
|---|---|
| C++ | `string conversation_get_reply(conversation conv)` |
| C# (method) | `public string Conversation.GetReply();` |
| C# (static) | `public static string SplashKit.ConversationGetReply(Conversation conv);` |
| Python | `def conversation_get_reply(conv):` |
| Pascal | `function ConversationGetReply(conv: Conversation): String` |

Returns a reply from a Conversation, with the ability to indicate if thoughts should be included.
Parameters:
| Name | Type | Description |
|---|---|---|
| conv | Conversation | The Conversation to receive the reply from |
| with_thoughts | Boolean | A boolean to indicate if thoughts should be included in the reply |
Return Type: String
Returns: The response from the model
Signatures:
| Language | Signature |
|---|---|
| C++ | `string conversation_get_reply(conversation conv, bool with_thoughts)` |
| C# (method) | `public string Conversation.GetReply(bool withThoughts);` |
| C# (static) | `public static string SplashKit.ConversationGetReply(Conversation conv, bool withThoughts);` |
| Python | `def conversation_get_reply_with_thoughts(conv, with_thoughts):` |
| Pascal | `function ConversationGetReply(conv: Conversation; withThoughts: Boolean): String` |

Returns a single piece of a reply (generally one word at a time) from the Conversation. You can use a loop while checking conversation_is_replying to retrieve the reply as it generates.
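The difference between the two blocking `conversation_get_reply` variants above can be sketched with stand-in functions. The canned strings and the placeholder `conv` object are assumptions for illustration; the real functions call the language model and block until it finishes:

```python
# Stand-ins for the two blocking reply variants: by default the
# model's "thoughts" are stripped; with_thoughts=True keeps them.
THOUGHTS = "(thinking it over...)"
ANSWER = "2 + 2 is 4."

def conversation_get_reply(conv):
    # Reply only, thoughts stripped.
    return ANSWER

def conversation_get_reply_with_thoughts(conv, with_thoughts):
    # Optionally prepend the thought text to the reply.
    return (THOUGHTS + " " + ANSWER) if with_thoughts else ANSWER


conv = object()  # placeholder for a real Conversation
print(conversation_get_reply(conv))                        # answer only
print(conversation_get_reply_with_thoughts(conv, True))    # thoughts + answer
```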
Parameters:
| Name | Type | Description |
|---|---|---|
| c | Conversation | The Conversation object to receive the reply from |
Return Type: String
Returns: Returns a small piece of the reply (generally 1 word or less)
Signatures:
| Language | Signature |
|---|---|
| C++ | `string conversation_get_reply_piece(conversation c)` |
| C# (method) | `public string Conversation.GetReplyPiece();` |
| C# (static) | `public static string SplashKit.ConversationGetReplyPiece(Conversation c);` |
| Python | `def conversation_get_reply_piece(c):` |
| Pascal | `function ConversationGetReplyPiece(c: Conversation): String` |

Checks if a language model is currently generating a reply within a Conversation. If so, you can continue to receive the message with conversation_get_reply_piece(conversation c).
Parameters:
| Name | Type | Description |
|---|---|---|
| c | Conversation | The Conversation object to check |
Return Type: Boolean
Returns: Returns whether the language model is still generating a reply
Signatures:
| Language | Signature |
|---|---|
| C++ | `bool conversation_is_replying(conversation c)` |
| C# (method) | `public bool Conversation.IsReplying();` |
| C# (static) | `public static bool SplashKit.ConversationIsReplying(Conversation c);` |
| Python | `def conversation_is_replying(c):` |
| Pascal | `function ConversationIsReplying(c: Conversation): Boolean` |

Checks if a language model is currently “thinking” while generating a reply within a Conversation. You can use this to filter out the “thoughts” and display them differently (or hide them entirely).
Parameters:
| Name | Type | Description |
|---|---|---|
| c | Conversation | The Conversation object to check |
Return Type: Boolean
Returns: Returns whether the language model is currently thinking while generating a reply
Signatures:
| Language | Signature |
|---|---|
| C++ | `bool conversation_is_thinking(conversation c)` |
| C# (method) | `public bool Conversation.IsThinking();` |
| C# (static) | `public static bool SplashKit.ConversationIsThinking(Conversation c);` |
| Python | `def conversation_is_thinking(c):` |
| Pascal | `function ConversationIsThinking(c: Conversation): Boolean` |

Creates a new Conversation object that uses the default language model. The Conversation object can have messages added to it, and responses streamed back from it, via the other Conversation functions and procedures.
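The `conversation_is_thinking` filtering idea can be sketched as a streaming loop that routes each piece into a visible or hidden buffer. The `Conversation` class and its piece list are stand-ins for illustration, not the real SplashKit types:

```python
# Stand-in streaming loop that uses an is_thinking flag to separate
# the model's "thoughts" from the visible reply while streaming.
class Conversation:
    def __init__(self, pieces):
        # Each piece is (text, is_thought); mimics a thinking model.
        self._pieces = list(pieces)
        self._thinking = False

    def is_replying(self):
        return len(self._pieces) > 0

    def is_thinking(self):
        # True while the most recent piece was part of a "thought".
        return self._thinking

    def get_reply_piece(self):
        text, self._thinking = self._pieces.pop(0)
        return text


conv = Conversation([("hmm", True), ("ok", True), ("Answer:", False), (" 42", False)])

visible, hidden = "", ""
while conv.is_replying():
    piece = conv.get_reply_piece()
    if conv.is_thinking():
        hidden += piece   # could be shown greyed-out, or dropped
    else:
        visible += piece

print(visible)  # -> Answer: 42
```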
Return Type: Conversation
Returns: Returns a new Conversation object.
Signatures:
| Language | Signature |
|---|---|
| C++ | `conversation create_conversation()` |
| C# (static) | `public static Conversation SplashKit.CreateConversation();` |
| C# (constructor) | `public Conversation();` |
| Python | `def create_conversation():` |
| Pascal | `function CreateConversation(): Conversation` |

Creates a new Conversation object that uses a chosen language model. The Conversation object can have messages added to it, and responses streamed back from it, via the other Conversation functions and procedures.
Parameters:
| Name | Type | Description |
|---|---|---|
| model | Language Model | The language model to use |
Return Type: Conversation
Returns: Returns a new Conversation object.
Signatures:
| Language | Signature |
|---|---|
| C++ | `conversation create_conversation(language_model model)` |
| C# (static) | `public static Conversation SplashKit.CreateConversation(LanguageModel model);` |
| C# (constructor) | `public Conversation(LanguageModel model);` |
| Python | `def create_conversation_with_model(model):` |
| Pascal | `function CreateConversation(model: LanguageModel): Conversation` |

Releases all of the Conversation objects which have been loaded.
Signatures:
| Language | Signature |
|---|---|
| C++ | `void free_all_conversations()` |
| C# (method) | `public static void GenerativeAi.FreeAll();` |
| C# (static) | `public static void SplashKit.FreeAllConversations();` |
| Python | `def free_all_conversations():` |
| Pascal | `procedure FreeAllConversations()` |

Frees the resources associated with the Conversation object.
Parameters:
| Name | Type | Description |
|---|---|---|
| c | Conversation | The Conversation object whose resources should be released. |
Signatures:
| Language | Signature |
|---|---|
| C++ | `void free_conversation(conversation c)` |
| C# (method) | `public void Conversation.Free();` |
| C# (static) | `public static void SplashKit.FreeConversation(Conversation c);` |
| Python | `def free_conversation(c):` |
| Pascal | `procedure FreeConversation(c: Conversation)` |

Generates a reply to a textual prompt by a language model. The language model will respond to the textual prompt in a chat-style format. It will follow instructions and answer questions. Instruct or Thinking models are recommended; Base models likely won’t output sensible results.
Parameters:
| Name | Type | Description |
|---|---|---|
| model | Language Model | The language model to use |
| prompt | String | The prompt for the language model to reply to. |
Return Type: String
Returns: The generated reply.
Signatures:
| Language | Signature |
|---|---|
| C++ | `string generate_reply(language_model model, string prompt)` |
| C# | `public static string SplashKit.GenerateReply(LanguageModel model, string prompt);` |
| Python | `def generate_reply_with_model(model, prompt):` |
| Pascal | `function GenerateReply(model: LanguageModel; prompt: String): String` |

Generates a reply to a textual prompt by a language model. The language model will respond to the textual prompt in a chat-style format. It will follow instructions and answer questions. Instruct or Thinking models are recommended; Base models likely won’t output sensible results.
Parameters:
| Name | Type | Description |
|---|---|---|
| prompt | String | The prompt for the language model to reply to. |
Return Type: String
Returns: The generated reply.
Signatures:
| Language | Signature |
|---|---|
| C++ | `string generate_reply(string prompt)` |
| C# | `public static string SplashKit.GenerateReply(string prompt);` |
| Python | `def generate_reply(prompt):` |
| Pascal | `function GenerateReply(prompt: String): String` |

Generates text that continues from a prompt, with a maximum of 125 tokens. The language model will continue predicting text based on patterns in the prompt - it will not directly follow instructions or answer questions. Base models are recommended; Instruct and Thinking models may work.
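The relationship between the two `generate_reply` forms above - with and without a model argument - can be sketched with stand-ins. The function bodies are fakes for illustration; per the constants table below, the no-model variant uses the default model (QWEN3_0_6B_INSTRUCT):

```python
# Stand-ins for the one-shot reply helpers: no Conversation object
# is involved, and the no-model variant falls back to the default.
DEFAULT_MODEL = "QWEN3_0_6B_INSTRUCT"

def generate_reply_with_model(model, prompt):
    # The real function sends the prompt to the chosen model and
    # returns its reply; here we just echo what was requested.
    return f"[{model}] reply to: {prompt!r}"

def generate_reply(prompt):
    # Convenience form: same call, default model.
    return generate_reply_with_model(DEFAULT_MODEL, prompt)


print(generate_reply("Name a primary colour."))
```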
Parameters:
| Name | Type | Description |
|---|---|---|
| model | Language Model | The language model to use |
| text | String | The input text for the language model to continue. |
Return Type: String
Returns: The generated reply.
Signatures:
| Language | Signature |
|---|---|
| C++ | `string generate_text(language_model model, string text)` |
| C# | `public static string SplashKit.GenerateText(LanguageModel model, string text);` |
| Python | `def generate_text_with_model(model, text):` |
| Pascal | `function GenerateText(model: LanguageModel; text: String): String` |

Generates text that continues from a prompt, up to a chosen maximum number of tokens. The language model will continue predicting text based on patterns in the prompt - it will not directly follow instructions or answer questions. Base models are recommended; Instruct and Thinking models may work.
Parameters:
| Name | Type | Description |
|---|---|---|
| model | Language Model | The language model to use |
| text | String | The input text for the language model to continue. |
| max_tokens | Integer | The maximum tokens used in response - determining the length of the output and the time taken. Keep this small for reasonable execution times. |
Return Type: String
Returns: The generated reply.
Signatures:
| Language | Signature |
|---|---|
| C++ | `string generate_text(language_model model, string text, int max_tokens)` |
| C# | `public static string SplashKit.GenerateText(LanguageModel model, string text, int maxTokens);` |
| Python | `def generate_text_with_model_and_tokens(model, text, max_tokens):` |
| Pascal | `function GenerateText(model: LanguageModel; text: String; maxTokens: Integer): String` |

Generates text that continues from a prompt, with a default of 125 tokens. The language model will continue predicting text based on patterns in the prompt - it will not directly follow instructions or answer questions. Base models are recommended; Instruct and Thinking models may work.
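What `max_tokens` controls can be sketched with a stand-in: the continuation is capped at that many tokens, and the cap also bounds how long generation takes. The "model" below is a fake that emits filler words, and tokens are approximated as whitespace-separated words; the real default cap is 125 tokens:

```python
# Stand-in showing how max_tokens caps the length of a continuation.
def generate_text_with_tokens(text, max_tokens=125):
    # Fake "model": an endless stream of filler tokens, cut off
    # once max_tokens tokens have been produced.
    endless = ("and then " * 1000).split()
    return text + " " + " ".join(endless[:max_tokens])


out = generate_text_with_tokens("Once upon a time,", max_tokens=5)
print(out)  # prompt plus a 5-token continuation
```

Keeping `max_tokens` small is what the docs mean by "reasonable execution times": generation cost grows with every token produced.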
Parameters:
| Name | Type | Description |
|---|---|---|
| text | String | The input text for the language model to continue. |
Return Type: String
Returns: The generated reply.
Signatures:
| Language | Signature |
|---|---|
| C++ | `string generate_text(string text)` |
| C# | `public static string SplashKit.GenerateText(string text);` |
| Python | `def generate_text(text):` |
| Pascal | `function GenerateText(text: String): String` |

Generates text that continues from a prompt. The language model will continue predicting text based on patterns in the prompt - it will not directly follow instructions or answer questions. Base models are recommended; Instruct and Thinking models may work.
Parameters:
| Name | Type | Description |
|---|---|---|
| text | String | The input text for the language model to continue. |
| max_tokens | Integer | The maximum tokens used in response - determining the length of the output and the time taken. Keep this small for reasonable execution times. |
Return Type: String
Returns: The generated reply.
Signatures:
| Language | Signature |
|---|---|
| C++ | `string generate_text(string text, int max_tokens)` |
| C# | `public static string SplashKit.GenerateText(string text, int maxTokens);` |
| Python | `def generate_text_with_tokens(text, max_tokens):` |
| Pascal | `function GenerateText(text: String; maxTokens: Integer): String` |

The Conversation type is used to refer to conversations between the user and a language model. You can use it to send messages to the language model, and stream responses back.
All Conversation objects are:

- created with `create_conversation()` or `create_conversation(language_model model)`
- and must be released using `free_conversation()` (to release a specific Conversation object) or `free_all_conversations()` (to release all created Conversation objects).
Language Models: Choose between different language models to trade off speed and intelligence. Each model is scaled to fit within 1-2 GB and will be automatically downloaded when needed - feel free to try them out!
| Constant | Description |
|---|---|
| QWEN3_0_6B_BASE | Qwen3 0.6B Base model - small, extremely fast and good for text completion. Very limited world knowledge. |
| QWEN3_0_6B_INSTRUCT | Qwen3 0.6B Instruct model (default) - small, extremely fast and can follow simple instructions. Very limited world knowledge. |
| QWEN3_0_6B_THINKING | Qwen3 0.6B Thinking model - small, extremely fast and can follow more specific instructions, but has a short delay before starting to reply. Very limited world knowledge. |
| QWEN3_1_7B_BASE | Qwen3 1.7B Base model - decently fast and good for text completion. Limited world knowledge. |
| QWEN3_1_7B_INSTRUCT | Qwen3 1.7B Instruct model - decently fast and can follow instructions. Limited world knowledge. |
| QWEN3_1_7B_THINKING | Qwen3 1.7B Thinking model - decently fast and can follow more difficult instructions, but has a delay before starting to reply. Limited world knowledge. |
| QWEN3_4B_BASE | Qwen3 4B Base model - slower but excellent for text completion/pattern based completion |
| QWEN3_4B_INSTRUCT | Qwen3 4B Instruct model - slower but can follow complex instructions |
| QWEN3_4B_THINKING | Qwen3 4B Thinking model - slower but can follow complex and specific instructions, but has a potentially long delay before starting to reply |
| GEMMA3_270M_BASE | Gemma3 270M Base model - tiny, extremely fast, and good for text completion. Very limited world knowledge. |
| GEMMA3_270M_INSTRUCT | Gemma3 270M Instruct model - tiny, extremely fast, and good for very simple instructions. Very limited world knowledge. |
| GEMMA3_1B_BASE | Gemma3 1B Base model - fast and good for text completion. Has decent world knowledge and multilingual abilities. |
| GEMMA3_1B_INSTRUCT | Gemma3 1B Instruct model - fast and can follow instructions. Has decent world knowledge and multilingual abilities. |
| GEMMA3_4B_BASE | Gemma3 4B Base model - slower but good for text completion/pattern based completion. Has decent world knowledge and multilingual abilities. |
| GEMMA3_4B_INSTRUCT | Gemma3 4B Instruct model - slower but can follow complex instructions. Has decent world knowledge and multilingual abilities. |
| Constant | Description |
|---|---|
| LanguageModel.Qwen306BBase | Qwen3 0.6B Base model - small, extremely fast and good for text completion. Very limited world knowledge. |
| LanguageModel.Qwen306BInstruct | Qwen3 0.6B Instruct model (default) - small, extremely fast and can follow simple instructions. Very limited world knowledge. |
| LanguageModel.Qwen306BThinking | Qwen3 0.6B Thinking model - small, extremely fast and can follow more specific instructions, but has a short delay before starting to reply. Very limited world knowledge. |
| LanguageModel.Qwen317BBase | Qwen3 1.7B Base model - decently fast and good for text completion. Limited world knowledge. |
| LanguageModel.Qwen317BInstruct | Qwen3 1.7B Instruct model - decently fast and can follow instructions. Limited world knowledge. |
| LanguageModel.Qwen317BThinking | Qwen3 1.7B Thinking model - decently fast and can follow more difficult instructions, but has a delay before starting to reply. Limited world knowledge. |
| LanguageModel.Qwen34BBase | Qwen3 4B Base model - slower but excellent for text completion/pattern based completion |
| LanguageModel.Qwen34BInstruct | Qwen3 4B Instruct model - slower but can follow complex instructions |
| LanguageModel.Qwen34BThinking | Qwen3 4B Thinking model - slower but can follow complex and specific instructions, but has a potentially long delay before starting to reply |
| LanguageModel.Gemma3270mBase | Gemma3 270M Base model - tiny, extremely fast, and good for text completion. Very limited world knowledge. |
| LanguageModel.Gemma3270mInstruct | Gemma3 270M Instruct model - tiny, extremely fast, and good for very simple instructions. Very limited world knowledge. |
| LanguageModel.Gemma31BBase | Gemma3 1B Base model - fast and good for text completion. Has decent world knowledge and multilingual abilities. |
| LanguageModel.Gemma31BInstruct | Gemma3 1B Instruct model - fast and can follow instructions. Has decent world knowledge and multilingual abilities. |
| LanguageModel.Gemma34BBase | Gemma3 4B Base model - slower but good for text completion/pattern based completion. Has decent world knowledge and multilingual abilities. |
| LanguageModel.Gemma34BInstruct | Gemma3 4B Instruct model - slower but can follow complex instructions. Has decent world knowledge and multilingual abilities. |
| Constant | Description |
|---|---|
| LanguageModel.qwen3_0_6b_base | Qwen3 0.6B Base model - small, extremely fast and good for text completion. Very limited world knowledge. |
| LanguageModel.qwen3_0_6b_instruct | Qwen3 0.6B Instruct model (default) - small, extremely fast and can follow simple instructions. Very limited world knowledge. |
| LanguageModel.qwen3_0_6b_thinking | Qwen3 0.6B Thinking model - small, extremely fast and can follow more specific instructions, but has a short delay before starting to reply. Very limited world knowledge. |
| LanguageModel.qwen3_1_7b_base | Qwen3 1.7B Base model - decently fast and good for text completion. Limited world knowledge. |
| LanguageModel.qwen3_1_7b_instruct | Qwen3 1.7B Instruct model - decently fast and can follow instructions. Limited world knowledge. |
| LanguageModel.qwen3_1_7b_thinking | Qwen3 1.7B Thinking model - decently fast and can follow more difficult instructions, but has a delay before starting to reply. Limited world knowledge. |
| LanguageModel.qwen3_4b_base | Qwen3 4B Base model - slower but excellent for text completion/pattern based completion |
| LanguageModel.qwen3_4b_instruct | Qwen3 4B Instruct model - slower but can follow complex instructions |
| LanguageModel.qwen3_4b_thinking | Qwen3 4B Thinking model - slower but can follow complex and specific instructions, but has a potentially long delay before starting to reply |
| LanguageModel.gemma3_270m_base | Gemma3 270M Base model - tiny, extremely fast, and good for text completion. Very limited world knowledge. |
| LanguageModel.gemma3_270m_instruct | Gemma3 270M Instruct model - tiny, extremely fast, and good for very simple instructions. Very limited world knowledge. |
| LanguageModel.gemma3_1b_base | Gemma3 1B Base model - fast and good for text completion. Has decent world knowledge and multilingual abilities. |
| LanguageModel.gemma3_1b_instruct | Gemma3 1B Instruct model - fast and can follow instructions. Has decent world knowledge and multilingual abilities. |
| LanguageModel.gemma3_4b_base | Gemma3 4B Base model - slower but good for text completion/pattern based completion. Has decent world knowledge and multilingual abilities. |
| LanguageModel.gemma3_4b_instruct | Gemma3 4B Instruct model - slower but can follow complex instructions. Has decent world knowledge and multilingual abilities. |