Changes from all commits (91 commits)
11b63e6
debug an error function name
tangg555 Oct 20, 2025
72e8f39
feat: Add DynamicCache compatibility for different transformers versions
tangg555 Oct 20, 2025
5702870
feat: implement APIAnalyzerForScheduler for memory operations
tangg555 Oct 21, 2025
4655b41
feat: Add search_ws API endpoint and enhance API analyzer functionality
tangg555 Oct 21, 2025
c20736c
fix: resolve test failures and warnings in test suite
tangg555 Oct 21, 2025
da72e7e
feat: add a test_robustness execution to test thread pool execution
tangg555 Oct 21, 2025
5b9b1e4
feat: optimize scheduler configuration and API search functionality
tangg555 Oct 22, 2025
6dac11e
feat: Add Redis auto-initialization with fallback strategies
tangg555 Oct 22, 2025
a207bf4
feat: add database connection management to ORM module
tangg555 Oct 24, 2025
8c1cc04
remove part of test
tangg555 Oct 24, 2025
f2b0da4
feat: add Redis-based ORM with multiprocess synchronization
tangg555 Oct 24, 2025
f0e8aab
fix: resolve scheduler module import and Redis integration issues
tangg555 Oct 24, 2025
731f00d
revise naive memcube creation in server router
tangg555 Oct 25, 2025
6d442fb
remove long-time tests in test_scheduler
tangg555 Oct 25, 2025
157f858
remove redis test which needs .env
tangg555 Oct 25, 2025
c483011
refactor all codes about mixture search with scheduler
tangg555 Oct 25, 2025
b81b82e
fix: resolve Redis API synchronization issues and implement search AP…
tangg555 Oct 26, 2025
90d1a0b
remove a test for api module
tangg555 Oct 26, 2025
1de72cf
revise to pass the test suite
tangg555 Oct 26, 2025
c72858e
addressed all conflicts
tangg555 Oct 27, 2025
3245376
address some bugs to make mix_search normally running
tangg555 Oct 27, 2025
57482cf
modify codes according to evaluation logs
tangg555 Oct 27, 2025
e4b8313
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Oct 27, 2025
011d248
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Oct 28, 2025
8c8d672
feat: Optimize mixture search and enhance API client
tangg555 Oct 28, 2025
aabad8d
feat: Add conversation_turn tracking for session-based memory search
tangg555 Oct 28, 2025
3faa5c3
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Oct 28, 2025
c6376cd
adress time bug in monitor
tangg555 Oct 29, 2025
bd0b234
revise simple tree
tangg555 Oct 29, 2025
5332d12
add mode to evaluation client; rewrite print to logger.info in db files
tangg555 Oct 29, 2025
aee13ba
feat: 1. add redis queue for scheduler 2. finish the code related to …
tangg555 Nov 5, 2025
f957967
debug the working memory code
tangg555 Nov 5, 2025
f520cca
addressed conflicts to merge
tangg555 Nov 5, 2025
a3f6636
addressed a range of bugs to make scheduler running correctly
tangg555 Nov 5, 2025
47e9851
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Nov 5, 2025
161af12
remove test_dispatch_parallel test
tangg555 Nov 5, 2025
1d8d14b
print change to logger.info
tangg555 Nov 5, 2025
00e3a75
addressed conflicts
tangg555 Nov 6, 2025
2852e56
adjucted the core code related to fine and mixture apis
tangg555 Nov 17, 2025
5d3cf45
addressed conflicts
tangg555 Nov 17, 2025
ab71f17
feat: create task queue to wrap local queue and redis queue. queue no…
tangg555 Nov 18, 2025
7665cda
fix bugs: debug bugs about internet trigger
tangg555 Nov 18, 2025
3559323
debug get searcher mode
tangg555 Nov 18, 2025
7c8e0d0
feat: add manual internet
fridayL Nov 18, 2025
27b0971
Merge branch 'feat/redis_scheduler' of https://github.com/MemTensor/M…
fridayL Nov 18, 2025
94d456b
Fix: fix code format
fridayL Nov 18, 2025
87b5358
feat: add strategy for fine search
tangg555 Nov 18, 2025
127fdc7
debug redis queue
tangg555 Nov 18, 2025
0911ced
debug redis queue
tangg555 Nov 18, 2025
d1a7261
fix bugs: completely addressed bugs about redis queue
tangg555 Nov 18, 2025
232be6f
refactor: add searcher to handler_init; remove info log from task_queue
tangg555 Nov 19, 2025
d16a7c8
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Nov 19, 2025
bc7236f
refactor: modify analyzer
tangg555 Nov 19, 2025
afaf8df
refactor: revise locomo_eval to make it support llm other than gpt-4o…
tangg555 Nov 19, 2025
0b02d3c
feat: develop advanced searcher with deep search
tangg555 Nov 20, 2025
2097eae
feat: finish a complete version of deep search
tangg555 Nov 21, 2025
aff2932
refactor: refactor deep search feature, now only allowing one-round d…
tangg555 Nov 24, 2025
4226a77
feat: implement the feature of get_tasks_status, but completed tasks …
tangg555 Nov 24, 2025
e27483c
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Nov 24, 2025
51964ec
debuging merged code; searching memories have bugs
tangg555 Nov 24, 2025
1e28ee5
change logging level
tangg555 Nov 24, 2025
e0001ea
debug api evaluation
tangg555 Nov 24, 2025
bae7022
fix bugs: change top to top_k
tangg555 Nov 24, 2025
d6cf824
Merge remote-tracking branch 'upstream/dev' into dev
tangg555 Nov 24, 2025
742df4e
change log
tangg555 Nov 24, 2025
9b310c4
refactor: rewrite deep search to make it work better
tangg555 Nov 25, 2025
7e4cfc5
change num_users
tangg555 Nov 26, 2025
c0cadac
feat: developed and test task broker and orchestrator
tangg555 Nov 26, 2025
e0eb490
Fix: Include task_id in ScheduleMessageItem serialization
Nov 29, 2025
2606fc7
Fix(Scheduler): Correct event log creation and task_id serialization
Nov 29, 2025
b3a6f1b
Feat(Scheduler): Add conditional detailed logging for KB updates
Nov 29, 2025
4b2cc2f
Fix(Scheduler): Correct create_event_log call sites
Nov 29, 2025
d8726ec
Fix(Scheduler): Deserialize task_id in ScheduleMessageItem.from_dict
Nov 29, 2025
b8cc42a
Refactor(Config): Centralize RabbitMQ config override logic
Nov 29, 2025
b6ebee6
Revert "Refactor(Config): Centralize RabbitMQ config override logic"
Nov 29, 2025
702d3e1
Fix(Redis): Convert None task_id to empty string during serialization
Nov 29, 2025
975e585
Feat(Log): Add diagnostic log to /product/add endpoint
Nov 29, 2025
bceaf68
Merge branch 'dev' into hotfix/task-id-loss
glin93 Nov 29, 2025
82a95c4
Feat(Log): Add comprehensive diagnostic logs for /product/add flow
Nov 29, 2025
c5631cc
Feat(Log): Add comprehensive diagnostic logs for /product/add flow an…
Nov 29, 2025
600fe24
Fix(rabbitmq): Use env vars for KB updates and improve logging
Nov 29, 2025
1da7c71
Fix(rabbitmq): Explicitly use MEMSCHEDULER_RABBITMQ_EXCHANGE_NAME and…
Nov 29, 2025
f32399b
Fix(add_handler): Update diagnostic log timestamp
Nov 29, 2025
42fea63
Fix(add_handler): Update diagnostic log timestamp again (auto-updated)
Nov 29, 2025
003a169
Update default scheduler redis stream prefix
Nov 29, 2025
6b5d5c6
Update diagnostic timestamp in add handler
Nov 29, 2025
5339b08
Allow optional log_content in scheduler event log
Nov 29, 2025
e1304c1
feat: new examples to test scheduelr
tangg555 Dec 1, 2025
045d154
feat: fair scheduler and refactor of search function
tangg555 Dec 1, 2025
85611c8
Merge remote-tracking branch 'upstream/hotfix/task-id-loss' into dev
tangg555 Dec 1, 2025
4aaeb54
fix bugs: address bugs caused by outdated test code
tangg555 Dec 1, 2025
Binary file added dump.rdb
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@ config:
act_mem_update_interval: 30
context_window_size: 10
thread_pool_max_workers: 5
consume_interval_seconds: 1
consume_interval_seconds: 0.01
working_mem_monitor_capacity: 20
activation_mem_monitor_capacity: 5
enable_parallel_dispatch: true
@@ -38,7 +38,7 @@ mem_scheduler:
act_mem_update_interval: 30
context_window_size: 10
thread_pool_max_workers: 10
consume_interval_seconds: 1
consume_interval_seconds: 0.01
working_mem_monitor_capacity: 20
activation_mem_monitor_capacity: 5
enable_parallel_dispatch: true
@@ -38,7 +38,7 @@ mem_scheduler:
act_mem_update_interval: 30
context_window_size: 10
thread_pool_max_workers: 10
consume_interval_seconds: 1
consume_interval_seconds: 0.01
working_mem_monitor_capacity: 20
activation_mem_monitor_capacity: 5
enable_parallel_dispatch: true
88 changes: 88 additions & 0 deletions examples/mem_scheduler/task_fair_schedule.py
@@ -0,0 +1,88 @@
import sys

from collections import defaultdict
from pathlib import Path


# Make the repository root importable before pulling in memos modules.
FILE_PATH = Path(__file__).absolute()
BASE_DIR = FILE_PATH.parent.parent.parent
sys.path.insert(0, str(BASE_DIR))

from memos.api.routers.server_router import mem_scheduler
from memos.mem_scheduler.schemas.message_schemas import ScheduleMessageItem


def make_message(user_id: str, mem_cube_id: str, label: str, idx: int | str) -> ScheduleMessageItem:
return ScheduleMessageItem(
item_id=f"{user_id}:{mem_cube_id}:{label}:{idx}",
user_id=user_id,
mem_cube_id=mem_cube_id,
label=label,
content=f"msg-{idx} for {user_id}/{mem_cube_id}/{label}",
)


def seed_messages_for_test_fairness(queue, combos, per_stream):
# Flood one stream so we can check that the others are not starved
(u, c, label) = combos[0]
task_target = 100
print(f"{u}:{c}:{label} submit {task_target} messages")
for i in range(task_target):
msg = make_message(u, c, label, f"overwhelm_{i}")
queue.submit_messages(msg)

for u, c, label in combos:
print(f"{u}:{c}:{label} submit {per_stream} messages")
for i in range(per_stream):
msg = make_message(u, c, label, i)
queue.submit_messages(msg)
print("======= seed_messages Done ===========")


def count_by_stream(messages):
counts = defaultdict(int)
for m in messages:
key = f"{m.user_id}:{m.mem_cube_id}:{m.label}"
counts[key] += 1
return counts


def run_fair_redis_schedule(batch_size: int = 3):
print("=== Redis Fairness Demo ===")
print(f"use_redis_queue: {mem_scheduler.use_redis_queue}")
mem_scheduler.consume_batch = batch_size
queue = mem_scheduler.memos_message_queue

# Isolate and clear queue
queue.debug_mode_on(debug_stream_prefix="fair_redis_schedule")
queue.clear()

# Define multiple streams: (user_id, mem_cube_id, task_label)
combos = [
("u1", "u1", "labelX"),
("u1", "u1", "labelY"),
("u2", "u2", "labelX"),
("u2", "u2", "labelY"),
]
per_stream = 5

# Seed messages evenly across streams
seed_messages_for_test_fairness(queue, combos, per_stream)

# Compute target batch size (fair split across streams)
print(f"Request batch_size={batch_size} for {len(combos)} streams")

for _ in range(len(combos)):
# Fetch one brokered pack
msgs = queue.get_messages(batch_size=batch_size)
print(f"Fetched {len(msgs)} messages in this pack")

# Check fairness: counts per stream
counts = count_by_stream(msgs)
for k in sorted(counts):
print(f"{k}: {counts[k]}")


if __name__ == "__main__":
# Task 1: fair Redis scheduling
run_fair_redis_schedule()
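The demo above floods one stream and then checks per-stream counts in each fetched pack. The fairness property it exercises can be sketched as a round-robin draw across per-stream queues (a hypothetical standalone helper, not the MemOS broker itself):

```python
from collections import deque


def fair_batch(streams: dict[str, deque], batch_size: int) -> list:
    """Draw up to batch_size items round-robin across streams,
    so a flooded stream cannot starve the others."""
    batch: list = []
    keys = list(streams)
    i = 0
    while len(batch) < batch_size and any(streams[k] for k in keys):
        key = keys[i % len(keys)]
        if streams[key]:
            # Take one item from this stream, then move to the next.
            batch.append(streams[key].popleft())
        i += 1
    return batch
```

With one stream holding 100 messages and another holding only 2, a batch of 4 still contains items from both streams — the invariant the demo's per-stream counts are meant to confirm.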
86 changes: 86 additions & 0 deletions examples/mem_scheduler/task_stop_rerun.py
@@ -0,0 +1,86 @@
from pathlib import Path
from time import sleep

# Note: we skip API handler status/wait utilities in this demo
from memos.api.routers.server_router import mem_scheduler
from memos.mem_scheduler.schemas.message_schemas import ScheduleMessageItem


# Debug: Print scheduler configuration
print("=== Scheduler Configuration Debug ===")
print(f"Scheduler type: {type(mem_scheduler).__name__}")
print(f"Config: {mem_scheduler.config}")
print(f"use_redis_queue: {mem_scheduler.use_redis_queue}")
print(f"Queue type: {type(mem_scheduler.memos_message_queue).__name__}")
print(f"Queue maxsize: {getattr(mem_scheduler.memos_message_queue, 'maxsize', 'N/A')}")
print("=====================================\n")

queue = mem_scheduler.memos_message_queue
queue.debug_mode_on(debug_stream_prefix="task_stop_rerun")


# Define a handler function
def my_test_handler(messages: list[ScheduleMessageItem]):
print(f"My test handler received {len(messages)} messages: {[one.item_id for one in messages]}")
for msg in messages:
# Create a file named by task_id (use item_id as numeric id 0..99)
task_id = str(msg.item_id)
file_path = tmp_dir / f"{task_id}.txt"
try:
print(f"writing {file_path}...")
file_path.write_text(f"Task {task_id} processed.\n")
sleep(5)
except Exception as e:
print(f"Failed to write {file_path}: {e}")


def submit_tasks():
mem_scheduler.memos_message_queue.clear()

# Create 100 messages (task_id 0..99)
users = ["user_A", "user_B"]
messages_to_send = [
ScheduleMessageItem(
item_id=str(i),
user_id=users[i % 2],
mem_cube_id="test_mem_cube",
label=TEST_HANDLER_LABEL,
content=f"Create file for task {i}",
)
for i in range(100)
]
# Submit messages in batch and print completion
print(f"Submitting {len(messages_to_send)} messages to the scheduler...")
mem_scheduler.memos_message_queue.submit_messages(messages_to_send)
print(f"Task submission done! tasks in queue: {mem_scheduler.get_tasks_status()}")


# Register the handler
TEST_HANDLER_LABEL = "test_handler"
mem_scheduler.register_handlers({TEST_HANDLER_LABEL: my_test_handler})


tmp_dir = Path("./tmp")
tmp_dir.mkdir(exist_ok=True)

# Test stop-and-restart: if tmp already has more than one file, skip submission and continue processing
existing_count = len(list(Path("tmp").glob("*.txt"))) if Path("tmp").exists() else 0
if existing_count > 1:
print(f"Skip submission: found {existing_count} files in tmp (>1), continue processing")
else:
submit_tasks()

# Wait until the scheduler reports no remaining tasks
poll_interval = 1
expected = 100
while mem_scheduler.get_tasks_status()["remaining"] != 0:
count = len(list(tmp_dir.glob("*.txt"))) if tmp_dir.exists() else 0
user_status_running = mem_scheduler.get_tasks_status()
print(f"[Monitor] user_status_running: {user_status_running}; Files in tmp: {count}/{expected}")
sleep(poll_interval)
print(f"[Result] Final files in tmp: {len(list(tmp_dir.glob('*.txt')))}")

# Stop the scheduler
print("Stopping the scheduler...")
mem_scheduler.stop()
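The monitor loop in this demo polls until `remaining == 0` with no upper bound, so a stalled task would hang the script. A generic bounded-wait helper (an illustrative addition, not part of the MemOS API) would cap the wait:

```python
import time


def wait_until(predicate, timeout_s: float = 60.0, poll_interval_s: float = 1.0) -> bool:
    """Poll predicate() until it is truthy or timeout_s elapses.

    Returns True on success and False on timeout, letting callers
    fail fast instead of spinning forever on a task that never completes.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval_s)
    # One final check so a success right at the deadline still counts.
    return bool(predicate())
```

The demo's loop body could then run as `wait_until(lambda: mem_scheduler.get_tasks_status()["remaining"] == 0, timeout_s=300)` and report a timeout instead of hanging.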
4 changes: 3 additions & 1 deletion src/memos/api/handlers/add_handler.py
@@ -45,7 +45,9 @@ def handle_add_memories(self, add_req: APIADDRequest) -> MemoryResponse:
Returns:
MemoryResponse with added memory information
"""
self.logger.info(f"[AddHandler] Add Req is: {add_req}")
self.logger.info(
f"[DIAGNOSTIC] server_router -> add_handler.handle_add_memories called (Modified at 2025-11-29 18:46). Full request: {add_req.model_dump_json(indent=2)}"
)

if add_req.info:
exclude_fields = list_all_fields()
4 changes: 2 additions & 2 deletions src/memos/api/handlers/base_handler.py
@@ -8,7 +8,7 @@
from typing import Any

from memos.log import get_logger
from memos.mem_scheduler.base_scheduler import BaseScheduler
from memos.mem_scheduler.optimized_scheduler import OptimizedScheduler
from memos.memories.textual.tree_text_memory.retrieve.advanced_searcher import AdvancedSearcher


@@ -127,7 +127,7 @@ def mem_reader(self):
return self.deps.mem_reader

@property
def mem_scheduler(self) -> BaseScheduler:
def mem_scheduler(self) -> OptimizedScheduler:
"""Get scheduler instance."""
return self.deps.mem_scheduler

1 change: 1 addition & 0 deletions src/memos/api/routers/product_router.py
@@ -188,6 +188,7 @@ def get_all_memories(memory_req: GetMemoryPlaygroundRequest):
@router.post("/add", summary="add a new memory", response_model=SimpleResponse)
def create_memory(memory_req: MemoryCreateRequest):
"""Create a new memory for a specific user."""
logger.info("DIAGNOSTIC: /product/add endpoint called. This confirms the new code is deployed.")
# Initialize status_tracker outside try block to avoid NameError in except blocks
status_tracker = None

3 changes: 3 additions & 0 deletions src/memos/mem_os/core.py
@@ -788,6 +788,9 @@ def process_textual_memory():
timestamp=datetime.utcnow(),
task_id=task_id,
)
logger.info(
f"[DIAGNOSTIC] core.add: Submitting message to scheduler: {message_item.model_dump_json(indent=2)}"
)
self.mem_scheduler.memos_message_queue.submit_messages(
messages=[message_item]
)
2 changes: 1 addition & 1 deletion src/memos/mem_os/utils/default_config.py
@@ -110,7 +110,7 @@ def get_default_config(
"act_mem_update_interval": kwargs.get("scheduler_act_mem_update_interval", 300),
"context_window_size": kwargs.get("scheduler_context_window_size", 5),
"thread_pool_max_workers": kwargs.get("scheduler_thread_pool_max_workers", 10),
"consume_interval_seconds": kwargs.get("scheduler_consume_interval_seconds", 3),
"consume_interval_seconds": kwargs.get("scheduler_consume_interval_seconds", 0.01),
"enable_parallel_dispatch": kwargs.get("scheduler_enable_parallel_dispatch", True),
"enable_activation_memory": True,
},
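This PR drops the default `consume_interval_seconds` from 1–3 s to 0.01 s across several configs. The trade-off is queueing latency versus idle wakeups, which a minimal polling consumer makes concrete (an illustrative sketch, not the actual MemOS scheduler loop):

```python
import queue
import time


def consume_loop(q: queue.Queue, handle, consume_interval_seconds: float = 0.01, stop=lambda: False):
    """Drain q, sleeping consume_interval_seconds between empty polls.

    A message waits at most one interval before pickup, so shrinking
    the interval from seconds to 0.01 s cuts worst-case queueing
    latency at the cost of more idle wakeups.
    """
    while not stop():
        try:
            msg = q.get_nowait()
        except queue.Empty:
            # Nothing to do: back off for one interval before polling again.
            time.sleep(consume_interval_seconds)
            continue
        handle(msg)
```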
10 changes: 7 additions & 3 deletions src/memos/mem_scheduler/base_scheduler.py
@@ -137,7 +137,6 @@ def __init__(self, config: BaseSchedulerConfig):
self.dispatcher = SchedulerDispatcher(
config=self.config,
memos_message_queue=self.memos_message_queue,
use_redis_queue=self.use_redis_queue,
max_workers=self.thread_pool_max_workers,
enable_parallel_dispatch=self.enable_parallel_dispatch,
status_tracker=self.status_tracker,
@@ -232,8 +231,8 @@ def initialize_modules(

# start queue monitor if enabled and a bot is set later

def debug_mode_on(self):
self.memos_message_queue.debug_mode_on()
def debug_mode_on(self, debug_stream_prefix="debug_mode"):
self.memos_message_queue.debug_mode_on(debug_stream_prefix=debug_stream_prefix)

def _cleanup_on_init_failure(self):
"""Clean up resources if initialization fails."""
@@ -594,6 +593,11 @@ def _submit_web_logs(
Args:
messages: Single log message or list of log messages
"""
messages_list = [messages] if isinstance(messages, ScheduleLogForWebItem) else messages
for message in messages_list:
logger.info(
f"[DIAGNOSTIC] base_scheduler._submit_web_logs called. Message to publish: {message.model_dump_json(indent=2)}"
)
if self.rabbitmq_config is None:
return

3 changes: 2 additions & 1 deletion src/memos/mem_scheduler/general_modules/scheduler_logger.py
@@ -113,9 +113,10 @@ def create_event_log(
metadata: list[dict],
memory_len: int,
memcube_name: str | None = None,
log_content: str | None = None,
) -> ScheduleLogForWebItem:
item = self.create_autofilled_log_item(
log_content="",
log_content=log_content or "",
label=label,
from_memory_type=from_memory_type,
to_memory_type=to_memory_type,
14 changes: 13 additions & 1 deletion src/memos/mem_scheduler/general_scheduler.py
@@ -367,16 +367,19 @@ def _add_message_consumer(self, messages: list[ScheduleMessageItem]) -> None:
if kb_log_content:
event = self.create_event_log(
label="knowledgeBaseUpdate",
# 1. Remove the log_content argument here
# 2. Add the memory_type fields
from_memory_type=USER_INPUT_TYPE,
to_memory_type=LONG_TERM_MEMORY_TYPE,
user_id=msg.user_id,
mem_cube_id=msg.mem_cube_id,
mem_cube=self.current_mem_cube,
memcube_log_content=kb_log_content,
metadata=None, # Per design doc for KB logs
metadata=None,
memory_len=len(kb_log_content),
memcube_name=self._map_memcube_name(msg.mem_cube_id),
)
# 3. Assign log_content after creation
event.log_content = (
f"Knowledge Base Memory Update: {len(kb_log_content)} changes."
)
@@ -474,6 +477,9 @@ def _add_message_consumer(self, messages: list[ScheduleMessageItem]) -> None:
logger.error(f"Error: {e}", exc_info=True)

def _mem_read_message_consumer(self, messages: list[ScheduleMessageItem]) -> None:
logger.info(
f"[DIAGNOSTIC] general_scheduler._mem_read_message_consumer called. Received messages: {[msg.model_dump_json(indent=2) for msg in messages]}"
)
logger.info(f"Messages {messages} assigned to {MEM_READ_LABEL} handler.")

def process_message(message: ScheduleMessageItem):
@@ -538,6 +544,9 @@ def _process_memories_with_reader(
task_id: str | None = None,
info: dict | None = None,
) -> None:
logger.info(
f"[DIAGNOSTIC] general_scheduler._process_memories_with_reader called. mem_ids: {mem_ids}, user_id: {user_id}, mem_cube_id: {mem_cube_id}, task_id: {task_id}"
)
"""
Process memories using mem_reader for enhanced memory processing.

@@ -635,6 +644,9 @@ def _process_memories_with_reader(
}
)
if kb_log_content:
logger.info(
f"[DIAGNOSTIC] general_scheduler._process_memories_with_reader: Creating event log for KB update. Label: knowledgeBaseUpdate, user_id: {user_id}, mem_cube_id: {mem_cube_id}, task_id: {task_id}. KB content: {json.dumps(kb_log_content, indent=2)}"
)
event = self.create_event_log(
label="knowledgeBaseUpdate",
from_memory_type=USER_INPUT_TYPE,
5 changes: 2 additions & 3 deletions src/memos/mem_scheduler/memory_manage_modules/retriever.py
@@ -209,10 +209,9 @@ def _split_batches(
def recall_for_missing_memories(
self,
query: str,
memories: list[TextualMemoryItem],
memories: list[str],
) -> tuple[str, bool]:
text_memories = [one.memory for one in memories] if memories else []
text_memories = "\n".join([f"- {mem}" for i, mem in enumerate(text_memories)])
text_memories = "\n".join(f"- {mem}" for mem in memories)

prompt = self.build_prompt(
template_name="enlarge_recall",