Merged
32 changes: 16 additions & 16 deletions docs/docs/06-api-reference/classes/LLMModule.md
@@ -1,6 +1,6 @@
# Class: LLMModule

Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:10](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L10)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:10](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L10)

Module for managing a Large Language Model (LLM) instance.

@@ -10,7 +10,7 @@ Module for managing a Large Language Model (LLM) instance.

> **new LLMModule**(`optionalCallbacks`): `LLMModule`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:19](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L19)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:20](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L20)

Creates a new instance of `LLMModule` with optional callbacks.

@@ -43,16 +43,16 @@ A new LLMModule instance.

### configure()

> **configure**(`configuration`): `void`
> **configure**(`config`): `void`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:81](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L81)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:87](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L87)

Configures chat, tool calling, and generation settings.
See [Configuring the model](https://docs.swmansion.com/react-native-executorch/docs/hooks/natural-language-processing/useLLM#configuring-the-model) for details.

#### Parameters

##### configuration
##### config

[`LLMConfig`](../interfaces/LLMConfig.md)

@@ -68,7 +68,7 @@ Configuration object containing `chatConfig`, `toolsConfig`, and `generationConf
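For context when reviewing the rename, a call with the `config` parameter might look like the following sketch. The top-level field names come from the `LLMConfig` interface linked above; the nested values and the `llm` instance are illustrative assumptions, not the library's documented options:

```typescript
// Top-level keys follow the LLMConfig interface; the nested values are
// placeholders, not the library's exact option names.
const config = {
  chatConfig: {},       // chat/conversation settings
  toolsConfig: {},      // tool-calling settings
  generationConfig: {}, // generation/sampling settings
};

// llm.configure(config); // where `llm` is an LLMModule instance (hypothetical usage)
```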

> **delete**(): `void`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:174](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L174)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:184](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L184)

Method to delete the model from memory.
Note that you cannot delete the model while it's generating.
@@ -84,7 +84,7 @@ You need to interrupt it first and make sure the model has stopped generating.

> **deleteMessage**(`index`): [`Message`](../interfaces/Message.md)[]
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:130](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L130)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:140](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L140)

Deletes all messages starting from the message at position `index`.
After deletion it calls `messageHistoryCallback()` with the new history.
@@ -110,7 +110,7 @@ The index of the message to delete from history.

> **forward**(`input`): `Promise`\<`string`\>
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:94](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L94)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:104](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L104)

Runs model inference with a raw input string.
You need to provide the entire conversation and prompt (in the correct format, with special tokens!) in the input string.
@@ -137,7 +137,7 @@ The generated response as a string.

> **generate**(`messages`, `tools?`): `Promise`\<`string`\>
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:105](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L105)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:115](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L115)

Runs the model to complete the chat passed in the `messages` argument. It doesn't manage conversation context.

@@ -167,7 +167,7 @@ The generated response as a string.

> **getGeneratedTokenCount**(): `number`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:147](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L147)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:157](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L157)

Returns the number of tokens generated in the last response.

@@ -183,7 +183,7 @@ The count of generated tokens.

> **getPromptTokensCount**(): `number`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:156](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L156)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:166](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L166)

Returns the number of prompt tokens in the last message.

@@ -199,7 +199,7 @@ The count of prompt tokens.

> **getTotalTokensCount**(): `number`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:165](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L165)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:175](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L175)

Returns the number of total tokens from the previous generation. This is a sum of prompt tokens and generated tokens.

@@ -215,7 +215,7 @@ The count of prompt and generated tokens.

> **interrupt**(): `void`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:138](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L138)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:148](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L148)

Interrupts model generation. It may return one more token after interrupt.

@@ -229,7 +229,7 @@ Interrupts model generation. It may return one more token after interrupt.

> **load**(`model`, `onDownloadProgressCallback`): `Promise`\<`void`\>
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:48](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L48)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:49](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L49)

Loads the LLM model and tokenizer.

@@ -273,7 +273,7 @@ Optional callback to track download progress (value between 0 and 1).

> **sendMessage**(`message`): `Promise`\<[`Message`](../interfaces/Message.md)[]\>
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:117](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L117)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:127](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L127)

Method to add a user message to the conversation.
After the model responds, it calls `messageHistoryCallback()` with both the user message and the model response.
@@ -299,7 +299,7 @@ The message string to send.

> **setTokenCallback**(`tokenCallback`): `void`
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:67](https://github.com/software-mansion/react-native-executorch/blob/326d6344894d75625c600d5988666e215a32d466/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L67)
Defined in: [packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:73](https://github.com/software-mansion/react-native-executorch/blob/a9d9b826d75623e7b7d41c2da95ed0c60fbb6424/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts#L73)

Sets a new token callback, invoked on every token batch.
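To illustrate the callback's role, here is a minimal self-contained sketch of a token-batch callback being registered and fed. The streamer class and its method names are stubs for illustration, not the library's streaming internals:

```typescript
type TokenCallback = (token: string) => void;

// Stub streamer: emits token batches to the registered callback.
class TokenStreamer {
  private tokenCallback: TokenCallback = () => {};

  setTokenCallback(tokenCallback: TokenCallback) {
    this.tokenCallback = tokenCallback;
  }

  // Simulates generation by emitting each token batch to the callback.
  emitAll(tokens: string[]) {
    for (const token of tokens) {
      this.tokenCallback(token);
    }
  }
}

// Usage: accumulate streamed token batches into the full response.
let response = '';
const streamer = new TokenStreamer();
streamer.setTokenCallback((token) => {
  response += token;
});
streamer.emitAll(['Hello', ', ', 'world']);
console.log(response); // "Hello, world"
```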

@@ -9,6 +9,7 @@ import { LLMConfig, LLMTool, Message } from '../../types/llm';
*/
export class LLMModule {
private controller: LLMController;
private pendingConfig?: LLMConfig;

/**
* Creates a new instance of `LLMModule` with optional callbacks.
@@ -57,6 +58,11 @@ export class LLMModule {
...model,
onDownloadProgressCallback,
});

if (this.pendingConfig) {
this.controller.configure(this.pendingConfig);
this.pendingConfig = undefined;
}
}

/**
@@ -78,8 +84,12 @@
*
* @param configuration - Configuration object containing `chatConfig`, `toolsConfig`, and `generationConfig`.
*/
configure({ chatConfig, toolsConfig, generationConfig }: LLMConfig) {
this.controller.configure({ chatConfig, toolsConfig, generationConfig });
configure(config: LLMConfig) {
if (this.controller.isReady) {
this.controller.configure(config);
} else {
this.pendingConfig = config;
}
}
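The branch above stashes the config when the controller isn't ready, and `load()` flushes it once loading finishes. A self-contained sketch of that deferred-configuration pattern, with a stub controller and a simplified synchronous `load()` (all names here are illustrative, not the library's internals):

```typescript
// Illustrative stand-in for LLMConfig; top-level keys match the interface.
interface Config {
  chatConfig?: object;
  toolsConfig?: object;
  generationConfig?: object;
}

// Stub controller: tracks readiness and the last applied config.
class StubController {
  isReady = false;
  appliedConfig?: Config;
  configure(config: Config) {
    this.appliedConfig = config;
  }
}

class DeferredConfigModule {
  private controller = new StubController();
  private pendingConfig?: Config;

  configure(config: Config) {
    if (this.controller.isReady) {
      this.controller.configure(config); // model loaded: apply immediately
    } else {
      this.pendingConfig = config; // not loaded yet: stash for later
    }
  }

  // The real load() is async and downloads the model; this stub only
  // flips readiness and flushes any pending config.
  load() {
    this.controller.isReady = true;
    if (this.pendingConfig) {
      this.controller.configure(this.pendingConfig);
      this.pendingConfig = undefined;
    }
  }

  get applied(): Config | undefined {
    return this.controller.appliedConfig;
  }
}

// Usage: configuring before load() is no longer lost.
const mod = new DeferredConfigModule();
mod.configure({ generationConfig: {} });
mod.load();
console.log(mod.applied !== undefined); // true
```

This mirrors the intent of the diff: a `configure()` call made before `load()` now takes effect as soon as loading completes, instead of being dropped or hitting an unready controller.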

/**