
Commit 8327c0f

sobychacko authored and markpollack committed
Anthropic Prompt Caching: Align CONVERSATION_HISTORY with Anthropic's incremental caching pattern
This commit updates the CONVERSATION_HISTORY cache strategy to align with Anthropic's official documentation and cookbook examples (https://github.com/anthropics/claude-cookbooks/blob/main/misc/prompt_caching.ipynb) for incremental conversation caching.

**Cache breakpoint placement:**
- Before: Cache breakpoint on the penultimate (second-to-last) user message
- After: Cache breakpoint on the last user message

**Aggregate eligibility:**
- Before: Only user messages were considered for the minimum content length check
- After: All message types (user, assistant, tool) within the 20-block lookback window are considered for aggregate eligibility

Anthropic's documentation and cookbook demonstrate incremental caching by placing cache_control on the LAST user message:

```python
result.append({
    "role": "user",
    "content": [{
        "type": "text",
        "text": turn["content"][0]["text"],
        "cache_control": {"type": "ephemeral"}  # On LAST user message
    }]
})
```

This pattern is also shown in their official docs: https://docs.claude.com/en/docs/build-with-claude/prompt-caching#large-context-caching-example

Anthropic's caching system uses prefix matching to find the longest matching prefix in the cache. By placing cache_control on the last user message, we enable the following incremental caching pattern:

```
Turn 1: Cache [System + User1]
Turn 2: Reuse [System + User1], process [Assistant1 + User2], cache [System + User1 + Assistant1 + User2]
Turn 3: Reuse [System + User1 + Assistant1 + User2], process [Assistant2 + User3], cache [System + User1 + Assistant1 + User2 + Assistant2 + User3]
```

The cache grows incrementally with each turn, building a larger prefix that can be reused. This is the pattern recommended by Anthropic.

The new implementation considers all message types (user, assistant, tool) within the 20-block lookback window when checking the minimum content length. This ensures that:
- Short user questions don't prevent caching when the conversation has long assistant responses
- The full conversation context is considered for the 1024+ token minimum
- The check aligns with Anthropic's note: "The automatic prefix checking only looks back approximately 20 content blocks from each explicit breakpoint"

**Breaking changes:** None. This is an implementation detail of the CONVERSATION_HISTORY strategy; the API surface remains unchanged. Users may observe:
- Different cache hit patterns (should be more effective)
- Cache metrics showing higher cache read tokens as conversations grow

**Testing:**
- Updated `shouldRespectMinLengthForUserHistoryCaching()` to test aggregate eligibility with combined message lengths
- Renamed `shouldApplyCacheControlToLastUserMessageForConversationHistory()` (from `shouldRespectAllButLastUserMessageForUserHistoryCaching`)
- Added `shouldDemonstrateIncrementalCachingAcrossMultipleTurns()` integration test showing the cache growth pattern across 4 conversation turns
- Updated mock test assertions to verify that the last message has cache_control

**Documentation:** Updated anthropic-chat.adoc to clarify:
- The CONVERSATION_HISTORY strategy description now mentions incremental prefix caching
- Code example comments updated to reflect the cache breakpoint on the last user message
- Implementation Details section expanded with an explanation of prefix matching and aggregate eligibility checking

**References:**
- Anthropic Prompt Caching Docs: https://docs.claude.com/en/docs/build-with-claude/prompt-caching
- Anthropic Cookbook: https://github.com/anthropics/claude-cookbooks/blob/main/misc/prompt_caching.ipynb

Signed-off-by: Soby Chacko <soby.chacko@broadcom.com>
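For orientation, the following is a minimal sketch of how an application opts into this strategy; the builder calls mirror those used in the tests and documentation touched by this commit, while `chatModel`, `conversationHistory`, and the option values are assumed placeholders:

```java
// Sketch only: selecting the CONVERSATION_HISTORY strategy via AnthropicChatOptions.
AnthropicChatOptions options = AnthropicChatOptions.builder()
	.model(AnthropicApi.ChatModel.CLAUDE_SONNET_4_0.getValue())
	.cacheOptions(AnthropicCacheOptions.builder()
		.strategy(AnthropicCacheStrategy.CONVERSATION_HISTORY)
		.build())
	.maxTokens(200)
	.build();

// Re-send the accumulated history on every turn; with the breakpoint on the last
// user message, each call can reuse the prefix cached by the previous call.
ChatResponse response = chatModel.call(new Prompt(conversationHistory, options));
```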
1 parent 838801f commit 8327c0f

File tree

4 files changed: +207 -38 lines

models/spring-ai-anthropic/src/main/java/org/springframework/ai/anthropic/AnthropicChatModel.java

Lines changed: 20 additions & 18 deletions
@@ -618,16 +618,16 @@ private List<AnthropicMessage> buildMessages(Prompt prompt, CacheEligibilityReso
 			List<ContentBlock> contentBlocks = new ArrayList<>();
 			String content = message.getText();
 			// For conversation history caching, apply cache control to the
-			// message immediately before the last user message.
-			boolean isPenultimateUserMessage = (lastUserIndex > 0) && (i == lastUserIndex - 1);
+			// last user message to cache the entire conversation up to that point.
+			boolean isLastUserMessage = (lastUserIndex >= 0) && (i == lastUserIndex);
 			ContentBlock contentBlock = new ContentBlock(content);
-			if (isPenultimateUserMessage && cacheEligibilityResolver.isCachingEnabled()) {
-				// Combine text from all user messages except the last one (current
-				// question)
-				// as the basis for cache eligibility checks
-				String combinedUserMessagesText = combineEligibleUserMessagesText(allMessages, lastUserIndex);
+			if (isLastUserMessage && cacheEligibilityResolver.isCachingEnabled()) {
+				// Combine text from all messages (user, assistant, tool) up to and
+				// including the last user message as the basis for cache eligibility
+				// checks
+				String combinedMessagesText = combineEligibleMessagesText(allMessages, lastUserIndex);
 				contentBlocks.add(cacheAwareContentBlock(contentBlock, messageType, cacheEligibilityResolver,
-						combinedUserMessagesText));
+						combinedMessagesText));
 			}
 			else {
 				contentBlocks.add(contentBlock);
@@ -676,19 +676,21 @@ else if (messageType == MessageType.TOOL) {
 		return result;
 	}
 
-	private String combineEligibleUserMessagesText(List<Message> userMessages, int lastUserIndex) {
-		List<Message> userMessagesForEligibility = new ArrayList<>();
+	private String combineEligibleMessagesText(List<Message> allMessages, int lastUserIndex) {
 		// Only 20 content blocks are considered by anthropic, so limit the number of
-		// message content to consider
-		int startIndex = Math.max(0, lastUserIndex - 20);
-		for (int i = startIndex; i < lastUserIndex; i++) {
-			Message message = userMessages.get(i);
-			if (message.getMessageType() == MessageType.USER) {
-				userMessagesForEligibility.add(message);
+		// message content to consider. We include all message types (user, assistant,
+		// tool)
+		// up to and including the last user message for aggregate eligibility checking.
+		int startIndex = Math.max(0, lastUserIndex - 19);
+		int endIndex = Math.min(allMessages.size(), lastUserIndex + 1);
+		StringBuilder sb = new StringBuilder();
+		for (int i = startIndex; i < endIndex; i++) {
+			Message message = allMessages.get(i);
+			String text = message.getText();
+			if (StringUtils.hasText(text)) {
+				sb.append(text);
 			}
 		}
-		StringBuilder sb = new StringBuilder();
-		userMessagesForEligibility.stream().map(Message::getText).filter(StringUtils::hasText).forEach(sb::append);
 		return sb.toString();
 	}
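To make the lookback arithmetic in `combineEligibleMessagesText` concrete, here is a small standalone illustration of the same windowing over plain strings; the class and method names are hypothetical and the Spring `Message` type is replaced by `String`:

```java
import java.util.List;

class AggregateEligibilityWindowSketch {

	// Mirrors the window used above: at most ~20 entries, ending at (and including)
	// the last user message, are concatenated for the minimum-length check.
	static String combineForEligibility(List<String> messageTexts, int lastUserIndex) {
		int startIndex = Math.max(0, lastUserIndex - 19);
		int endIndex = Math.min(messageTexts.size(), lastUserIndex + 1);
		StringBuilder sb = new StringBuilder();
		for (int i = startIndex; i < endIndex; i++) {
			String text = messageTexts.get(i);
			if (text != null && !text.isBlank()) {
				sb.append(text);
			}
		}
		return sb.toString();
	}

	public static void main(String[] args) {
		// A short user question no longer blocks caching: the long assistant reply
		// in between also counts toward the aggregate length.
		List<String> texts = List.of("short question", "a long assistant answer ...", "follow-up question");
		System.out.println(combineForEligibility(texts, 2).length());
	}
}
```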

models/spring-ai-anthropic/src/test/java/org/springframework/ai/anthropic/AnthropicPromptCachingIT.java

Lines changed: 174 additions & 12 deletions
@@ -280,17 +280,20 @@ void shouldRespectMinLengthForSystemCaching() {
 
 	@Test
 	void shouldRespectMinLengthForUserHistoryCaching() {
-		// Two-user-message prompt; only the first (history tail) is eligible.
+		// Two-user-message prompt; aggregate length check applies
 		String userMessage = loadPrompt("system-only-cache-prompt.txt");
-		List<Message> messages = List.of(new UserMessage(userMessage),
-				new UserMessage("Please answer this question succinctly"));
+		String secondUserMessage = "Please answer this question succinctly";
+		List<Message> messages = List.of(new UserMessage(userMessage), new UserMessage(secondUserMessage));
+
+		// Calculate combined length of both messages for aggregate checking
+		int combinedLength = userMessage.length() + secondUserMessage.length();
 
-		// Set USER min length high so caching should not apply
+		// Set USER min length higher than combined length so caching should not apply
 		AnthropicChatOptions noCacheOptions = AnthropicChatOptions.builder()
 			.model(AnthropicApi.ChatModel.CLAUDE_SONNET_4_0.getValue())
 			.cacheOptions(AnthropicCacheOptions.builder()
 				.strategy(AnthropicCacheStrategy.CONVERSATION_HISTORY)
-				.messageTypeMinContentLength(MessageType.USER, userMessage.length() + 1)
+				.messageTypeMinContentLength(MessageType.USER, combinedLength + 1)
 				.build())
 			.maxTokens(80)
 			.temperature(0.2)
@@ -303,12 +306,12 @@ void shouldRespectMinLengthForUserHistoryCaching() {
 		assertThat(noCacheUsage.cacheCreationInputTokens()).isEqualTo(0);
 		assertThat(noCacheUsage.cacheReadInputTokens()).isEqualTo(0);
 
-		// Now allow caching by lowering the USER min length
+		// Now allow caching by lowering the USER min length below combined length
 		AnthropicChatOptions cacheOptions = AnthropicChatOptions.builder()
 			.model(AnthropicApi.ChatModel.CLAUDE_SONNET_4_0.getValue())
 			.cacheOptions(AnthropicCacheOptions.builder()
 				.strategy(AnthropicCacheStrategy.CONVERSATION_HISTORY)
-				.messageTypeMinContentLength(MessageType.USER, userMessage.length() - 1)
+				.messageTypeMinContentLength(MessageType.USER, combinedLength - 1)
 				.build())
 			.maxTokens(80)
 			.temperature(0.2)
@@ -319,20 +322,20 @@ void shouldRespectMinLengthForUserHistoryCaching() {
 		AnthropicApi.Usage cacheUsage = getAnthropicUsage(cacheResponse);
 		assertThat(cacheUsage).isNotNull();
 		assertThat(cacheUsage.cacheCreationInputTokens())
-			.as("Expect some cache creation tokens when USER history tail is cached")
+			.as("Expect some cache creation tokens when aggregate content meets min length")
 			.isGreaterThan(0);
 	}
 
 	@Test
-	void shouldRespectAllButLastUserMessageForUserHistoryCaching() {
-		// Three-user-message prompt; only the first (history tail) is eligible.
+	void shouldApplyCacheControlToLastUserMessageForConversationHistory() {
+		// Three-user-message prompt; the last user message will have cache_control.
 		String userMessage = loadPrompt("system-only-cache-prompt.txt");
 		List<Message> messages = List.of(new UserMessage(userMessage),
 				new UserMessage("Additional content to exceed min length"),
 				new UserMessage("Please answer this question succinctly"));
 
-		// The combined length of the first two USER messages exceeds the min length,
-		// so caching should apply
+		// The combined length of all three USER messages (including the last) exceeds
+		// the min length, so caching should apply
 		AnthropicChatOptions cacheOptions = AnthropicChatOptions.builder()
 			.model(AnthropicApi.ChatModel.CLAUDE_SONNET_4_0.getValue())
 			.cacheOptions(AnthropicCacheOptions.builder()
@@ -450,4 +453,163 @@ void shouldHandleMultipleCacheStrategiesInSession() {
 		}
 	}
 
+	@Test
+	void shouldDemonstrateIncrementalCachingAcrossMultipleTurns() {
+		// This test demonstrates how caching grows incrementally with each turn
+		// NOTE: Anthropic requires 1024+ tokens for caching to activate
+		// We use a large system message to ensure we cross this threshold
+
+		// Large system prompt to ensure we exceed 1024 token minimum for caching
+		String largeSystemPrompt = loadPrompt("system-only-cache-prompt.txt");
+
+		AnthropicChatOptions options = AnthropicChatOptions.builder()
+			.model(AnthropicApi.ChatModel.CLAUDE_SONNET_4_0.getValue())
+			.cacheOptions(AnthropicCacheOptions.builder()
+				.strategy(AnthropicCacheStrategy.CONVERSATION_HISTORY)
+				// Disable min content length since we're using aggregate check
+				.messageTypeMinContentLength(MessageType.USER, 0)
+				.build())
+			.maxTokens(200)
+			.temperature(0.3)
+			.build();
+
+		List<Message> conversationHistory = new ArrayList<>();
+		// Add system message to provide enough tokens for caching
+		conversationHistory.add(new SystemMessage(largeSystemPrompt));
+
+		// Turn 1: Initial question
+		logger.info("\n=== TURN 1: Initial Question ===");
+		conversationHistory.add(new UserMessage("What is quantum computing? Please explain the basics."));
+
+		ChatResponse turn1 = this.chatModel.call(new Prompt(conversationHistory, options));
+		assertThat(turn1).isNotNull();
+		String assistant1Response = turn1.getResult().getOutput().getText();
+		conversationHistory.add(turn1.getResult().getOutput());
+
+		AnthropicApi.Usage usage1 = getAnthropicUsage(turn1);
+		assertThat(usage1).isNotNull();
+		logger.info("Turn 1 - User: '{}'", conversationHistory.get(0).getText().substring(0, 50) + "...");
+		logger.info("Turn 1 - Assistant: '{}'",
+				assistant1Response.substring(0, Math.min(100, assistant1Response.length())) + "...");
+		logger.info("Turn 1 - Input tokens: {}", usage1.inputTokens());
+		logger.info("Turn 1 - Cache creation tokens: {}", usage1.cacheCreationInputTokens());
+		logger.info("Turn 1 - Cache read tokens: {}", usage1.cacheReadInputTokens());
+
+		// Note: First turn may not create cache if total tokens < 1024 (Anthropic's
+		// minimum)
+		// We'll track whether caching starts in turn 1 or later
+		boolean cachingStarted = usage1.cacheCreationInputTokens() > 0;
+		logger.info("Turn 1 - Caching started: {}", cachingStarted);
+		assertThat(usage1.cacheReadInputTokens()).as("Turn 1 should not read cache (no previous cache)").isEqualTo(0);
+
+		// Turn 2: Follow-up question
+		logger.info("\n=== TURN 2: Follow-up Question ===");
+		conversationHistory.add(new UserMessage("How does quantum entanglement work in this context?"));
+
+		ChatResponse turn2 = this.chatModel.call(new Prompt(conversationHistory, options));
+		assertThat(turn2).isNotNull();
+		String assistant2Response = turn2.getResult().getOutput().getText();
+		conversationHistory.add(turn2.getResult().getOutput());
+
+		AnthropicApi.Usage usage2 = getAnthropicUsage(turn2);
+		assertThat(usage2).isNotNull();
+		logger.info("Turn 2 - User: '{}'", conversationHistory.get(2).getText());
+		logger.info("Turn 2 - Assistant: '{}'",
+				assistant2Response.substring(0, Math.min(100, assistant2Response.length())) + "...");
+		logger.info("Turn 2 - Input tokens: {}", usage2.inputTokens());
+		logger.info("Turn 2 - Cache creation tokens: {}", usage2.cacheCreationInputTokens());
+		logger.info("Turn 2 - Cache read tokens: {}", usage2.cacheReadInputTokens());
+
+		// Second turn: If caching started in turn 1, we should see cache reads
+		// Otherwise, caching might start here if we've accumulated enough tokens
+		if (cachingStarted) {
+			assertThat(usage2.cacheReadInputTokens()).as("Turn 2 should read cache from Turn 1").isGreaterThan(0);
+		}
+		// Update caching status
+		cachingStarted = cachingStarted || usage2.cacheCreationInputTokens() > 0;
+
+		// Turn 3: Another follow-up
+		logger.info("\n=== TURN 3: Deeper Question ===");
+		conversationHistory
+			.add(new UserMessage("Can you give me a practical example of quantum computing application?"));
+
+		ChatResponse turn3 = this.chatModel.call(new Prompt(conversationHistory, options));
+		assertThat(turn3).isNotNull();
+		String assistant3Response = turn3.getResult().getOutput().getText();
+		conversationHistory.add(turn3.getResult().getOutput());
+
+		AnthropicApi.Usage usage3 = getAnthropicUsage(turn3);
+		assertThat(usage3).isNotNull();
+		logger.info("Turn 3 - User: '{}'", conversationHistory.get(4).getText());
+		logger.info("Turn 3 - Assistant: '{}'",
+				assistant3Response.substring(0, Math.min(100, assistant3Response.length())) + "...");
+		logger.info("Turn 3 - Input tokens: {}", usage3.inputTokens());
+		logger.info("Turn 3 - Cache creation tokens: {}", usage3.cacheCreationInputTokens());
+		logger.info("Turn 3 - Cache read tokens: {}", usage3.cacheReadInputTokens());
+
+		// Third turn: Should read cache if caching has started
+		if (cachingStarted) {
+			assertThat(usage3.cacheReadInputTokens()).as("Turn 3 should read cache if caching has started")
+				.isGreaterThan(0);
+		}
+		// Update caching status
+		cachingStarted = cachingStarted || usage3.cacheCreationInputTokens() > 0;
+
+		// Turn 4: Final question
+		logger.info("\n=== TURN 4: Final Question ===");
+		conversationHistory.add(new UserMessage("What are the limitations of current quantum computers?"));
+
+		ChatResponse turn4 = this.chatModel.call(new Prompt(conversationHistory, options));
+		assertThat(turn4).isNotNull();
+		String assistant4Response = turn4.getResult().getOutput().getText();
+		conversationHistory.add(turn4.getResult().getOutput());
+
+		AnthropicApi.Usage usage4 = getAnthropicUsage(turn4);
+		assertThat(usage4).isNotNull();
+		logger.info("Turn 4 - User: '{}'", conversationHistory.get(6).getText());
+		logger.info("Turn 4 - Assistant: '{}'",
+				assistant4Response.substring(0, Math.min(100, assistant4Response.length())) + "...");
+		logger.info("Turn 4 - Input tokens: {}", usage4.inputTokens());
+		logger.info("Turn 4 - Cache creation tokens: {}", usage4.cacheCreationInputTokens());
+		logger.info("Turn 4 - Cache read tokens: {}", usage4.cacheReadInputTokens());
+
+		// Fourth turn: By now we should definitely have caching working
+		assertThat(cachingStarted).as("Caching should have started by turn 4").isTrue();
+		if (cachingStarted) {
+			assertThat(usage4.cacheReadInputTokens()).as("Turn 4 should read cache").isGreaterThan(0);
+		}
+
+		// Summary logging
+		logger.info("\n=== CACHING SUMMARY ===");
+		logger.info("Turn 1 - Created: {}, Read: {}", usage1.cacheCreationInputTokens(), usage1.cacheReadInputTokens());
+		logger.info("Turn 2 - Created: {}, Read: {}", usage2.cacheCreationInputTokens(), usage2.cacheReadInputTokens());
+		logger.info("Turn 3 - Created: {}, Read: {}", usage3.cacheCreationInputTokens(), usage3.cacheReadInputTokens());
+		logger.info("Turn 4 - Created: {}, Read: {}", usage4.cacheCreationInputTokens(), usage4.cacheReadInputTokens());
+
+		// Demonstrate incremental growth pattern
+		logger.info("\n=== CACHE GROWTH PATTERN ===");
+		logger.info("Cache read tokens grew from {} → {} → {} → {}", usage1.cacheReadInputTokens(),
+				usage2.cacheReadInputTokens(), usage3.cacheReadInputTokens(), usage4.cacheReadInputTokens());
+		logger.info("This demonstrates incremental prefix caching: each turn builds on the previous cache");
+
+		// Verify that once caching starts, cache reads continue to grow
+		List<Integer> cacheReads = List.of(usage1.cacheReadInputTokens(), usage2.cacheReadInputTokens(),
+				usage3.cacheReadInputTokens(), usage4.cacheReadInputTokens());
+		int firstNonZeroIndex = -1;
+		for (int i = 0; i < cacheReads.size(); i++) {
+			if (cacheReads.get(i) > 0) {
+				firstNonZeroIndex = i;
+				break;
+			}
+		}
+		if (firstNonZeroIndex >= 0 && firstNonZeroIndex < cacheReads.size() - 1) {
+			// Verify each subsequent turn has cache reads >= previous
+			for (int i = firstNonZeroIndex + 1; i < cacheReads.size(); i++) {
+				assertThat(cacheReads.get(i))
+					.as("Cache reads should grow or stay same once caching starts (turn %d vs turn %d)", i + 1, i)
+					.isGreaterThanOrEqualTo(cacheReads.get(i - 1));
+			}
+		}
+	}
+
 }
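Condensed, the four turns above follow the loop shape a chat-memory application would use (a sketch reusing the names from the test; `chatModel`, `options`, and `largeSystemPrompt` are assumed from the surrounding set-up):

```java
// Sketch: one growing history list, re-sent each turn, so every call extends the
// cached prefix that the next call reads back.
List<Message> conversationHistory = new ArrayList<>();
conversationHistory.add(new SystemMessage(largeSystemPrompt));

List<String> questions = List.of(
		"What is quantum computing? Please explain the basics.",
		"How does quantum entanglement work in this context?",
		"Can you give me a practical example of quantum computing application?",
		"What are the limitations of current quantum computers?");

for (String question : questions) {
	conversationHistory.add(new UserMessage(question));
	ChatResponse response = chatModel.call(new Prompt(conversationHistory, options));
	// The assistant reply joins the history and becomes part of the next cached prefix.
	conversationHistory.add(response.getResult().getOutput());
}
```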

models/spring-ai-anthropic/src/test/java/org/springframework/ai/anthropic/AnthropicPromptCachingMockTest.java

Lines changed: 5 additions & 5 deletions
@@ -331,11 +331,11 @@ void testConversationHistoryCacheStrategy() throws Exception {
 		assertThat(messagesArray.isArray()).isTrue();
 		assertThat(messagesArray.size()).isGreaterThan(1);
 
-		// Verify the second-to-last message has cache control (conversation history)
-		if (messagesArray.size() >= 2) {
-			JsonNode secondToLastMessage = messagesArray.get(messagesArray.size() - 2);
-			assertThat(secondToLastMessage.has("content")).isTrue();
-			JsonNode contentArray = secondToLastMessage.get("content");
+		// Verify the last message has cache control (conversation history)
+		if (messagesArray.size() >= 1) {
+			JsonNode lastMessage = messagesArray.get(messagesArray.size() - 1);
+			assertThat(lastMessage.has("content")).isTrue();
+			JsonNode contentArray = lastMessage.get("content");
 			if (contentArray.isArray() && contentArray.size() > 0) {
 				JsonNode lastContentBlock = contentArray.get(contentArray.size() - 1);
 				assertThat(lastContentBlock.has("cache_control")).isTrue();

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/anthropic-chat.adoc

Lines changed: 8 additions & 3 deletions
@@ -213,9 +213,9 @@ Different models have different minimum token thresholds for cache effectiveness
 Spring AI provides strategic cache placement through the `AnthropicCacheStrategy` enum:
 
 * `NONE`: Disables prompt caching completely
-* `SYSTEM_ONLY`: Caches only the system message content
+* `SYSTEM_ONLY`: Caches only the system message content
 * `SYSTEM_AND_TOOLS`: Caches system message and the last tool definition
-* `CONVERSATION_HISTORY`: Caches conversation history in chat memory scenarios
+* `CONVERSATION_HISTORY`: Caches the entire conversation history by placing cache breakpoints on tools (if present), system message, and the last user message. This enables incremental prefix caching for multi-turn conversations
 
 This strategic approach ensures optimal cache breakpoint placement while staying within Anthropic's 4-breakpoint limit.
 
@@ -272,7 +272,7 @@ ChatResponse response = chatModel.call(
 
 [source,java]
 ----
-// Cache conversation history with ChatClient and memory (latest user question is not cached)
+// Cache conversation history with ChatClient and memory (cache breakpoint on last user message)
 ChatClient chatClient = ChatClient.builder(chatModel)
 	.defaultSystem("You are a personalized career counselor...")
 	.defaultAdvisors(MessageChatMemoryAdvisor.builder(chatMemory)
@@ -620,13 +620,18 @@ Even small changes will require a new cache entry.
 The prompt caching implementation in Spring AI follows these key design principles:
 
 1. **Strategic Cache Placement**: Cache breakpoints are automatically placed at optimal locations based on the chosen strategy, ensuring compliance with Anthropic's 4-breakpoint limit.
+- `CONVERSATION_HISTORY` places cache breakpoints on: tools (if present), system message, and the last user message
+- This enables Anthropic's prefix matching to incrementally cache the growing conversation history
+- Each turn builds on the previous cached prefix, maximizing cache reuse
 
 2. **Provider Portability**: Cache configuration is done through `AnthropicChatOptions` rather than individual messages, preserving compatibility when switching between different AI providers.
 
 3. **Thread Safety**: The cache breakpoint tracking is implemented with thread-safe mechanisms to handle concurrent requests correctly.
 
 4. **Automatic Content Ordering**: The implementation ensures proper on-the-wire ordering of JSON content blocks and cache controls according to Anthropic's API requirements.
 
+5. **Aggregate Eligibility Checking**: For `CONVERSATION_HISTORY`, the implementation considers all message types (user, assistant, tool) within the last ~20 content blocks when determining if the combined content meets the minimum token threshold for caching.
+
 === Future Enhancements
 
 The current cache strategies are designed to handle **90% of common use cases** effectively. For applications requiring more granular control, future enhancements may include:
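For completeness, a sketch of how the truncated `ChatClient` example from the documentation hunk above might be wired end to end; the system text, `chatMemory` bean, and option values are placeholders, and only the builder methods shown in the diff are taken from the docs:

```java
// Sketch: ChatClient with chat memory plus CONVERSATION_HISTORY caching. The memory
// advisor replays the stored conversation on each call, and the cache breakpoint on
// the last user message lets Anthropic reuse the growing prefix across turns.
ChatClient chatClient = ChatClient.builder(chatModel)
	.defaultSystem("You are a personalized career counselor...")
	.defaultAdvisors(MessageChatMemoryAdvisor.builder(chatMemory).build())
	.defaultOptions(AnthropicChatOptions.builder()
		.cacheOptions(AnthropicCacheOptions.builder()
			.strategy(AnthropicCacheStrategy.CONVERSATION_HISTORY)
			.build())
		.build())
	.build();

String answer = chatClient.prompt()
	.user("What skills should I focus on next?")
	.call()
	.content();
```

Because the advisor re-sends the full stored history on every call, the incremental caching pattern described in this commit applies to this set-up unchanged.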

0 commit comments