Scraping Google search results in parallel using proxies #4003
-
The title has the summary. I have a working Google scraper built around "with SB() as sb:", but how can I run it in parallel using proxies? I tried user-data-dir, but that got me caught by Google, and I tried varying attributes between instances (for example, turning block_images on for one instance and off for the others), but no luck there either. Can you tell me the best approach here? Thanks in advance.
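Roughly what I'm running now, as a simplified sketch (the proxy strings and queries below are placeholders, not my real values):

```python
from concurrent.futures import ThreadPoolExecutor
from seleniumbase import SB

PROXIES = [  # placeholder proxies, one per worker
    "user:pass@proxy1.example.com:3128",
    "user:pass@proxy2.example.com:3128",
]
QUERIES = ["seleniumbase", "web scraping"]

def scrape(query, proxy):
    # Each worker opens its own SB() context with its own proxy
    with SB(proxy=proxy) as sb:
        sb.open("https://www.google.com/search?q=" + query.replace(" ", "+"))
        return sb.get_text("body")

with ThreadPoolExecutor(max_workers=len(QUERIES)) as pool:
    results = list(pool.map(scrape, QUERIES, PROXIES))
```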
Replies: 1 comment
-
For multithreading with stealth, see this sb_cdp example: SeleniumBase/examples/cdp_mode/raw_multi_captcha.py
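A minimal sketch of that multithreaded CDP Mode pattern (not the example file itself; the proxy strings and queries are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor
from seleniumbase import SB

PROXIES = [  # placeholder proxies, one per worker
    "user:pass@proxy1.example.com:3128",
    "user:pass@proxy2.example.com:3128",
]
QUERIES = ["seleniumbase cdp mode", "undetected scraping"]

def scrape(query, proxy):
    url = "https://www.google.com/search?q=" + query.replace(" ", "+")
    # Each thread gets its own SB() context and its own proxy
    with SB(uc=True, proxy=proxy) as sb:
        sb.activate_cdp_mode(url)  # WebDriver disconnects; the browser is driven via CDP
        sb.sleep(2)
        return sb.cdp.get_text("body")  # read the page through the CDP API

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=len(QUERIES)) as pool:
        for text in pool.map(scrape, QUERIES, PROXIES):
            print(text[:300])
```

See the linked example file for the full pattern, including CAPTCHA handling across multiple browser instances.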