
Sourcery refactored main branch #1

Open

sourcery-ai[bot] wants to merge 1 commit into main from sourcery/main
Conversation

sourcery-ai bot commented Jun 28, 2023

Branch main refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line

To manually merge these changes, make sure you're on the main branch, then run:

git fetch origin sourcery/main
git merge --ff-only FETCH_HEAD
git reset HEAD^

Help us improve this pull request!

@sourcery-ai sourcery-ai bot requested a review from SilentDemonSD June 28, 2023 08:02
Comment on lines -62 to +66

  def decrypt(string):
      return base64.b64decode(string[::-1][24:-20]).decode('utf-8')



Function scrapeIndex refactored with the following changes:

Comment on lines -319 to +318
- bluelist = []
- for ele in soup:
-     bluelist.append(ele.get('href'))
+ bluelist = [ele.get('href') for ele in soup]

Function igggames refactored with the following changes:
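The pattern Sourcery applies here, collapsing an append loop into a list comprehension, can be sketched in isolation. The `soup` list below is a stand-in: plain dicts whose `.get()` mirrors the `Tag.get()` call on parsed BeautifulSoup elements.

```python
# Plain dicts stand in for BeautifulSoup tags; dict.get mirrors Tag.get.
soup = [{'href': '/a'}, {'href': '/b'}]

# Before: build the list by appending inside a loop.
def hrefs_loop(elements):
    bluelist = []
    for ele in elements:
        bluelist.append(ele.get('href'))
    return bluelist

# After: the equivalent single list comprehension, as in the diff.
def hrefs_comp(elements):
    return [ele.get('href') for ele in elements]
```

The comprehension expresses the same transformation in one line and avoids the repeated `append` method lookup.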

Comment on lines -375 to +372
- if not (next_s and isinstance(next_s,NavigableString)):
+ if not next_s or not isinstance(next_s, NavigableString):

Function scrappers refactored with the following changes:
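This rewrite is De Morgan's law: `not (a and b)` is equivalent to `not a or not b`. A quick self-contained check over every truth combination, using plain booleans rather than a `NavigableString` instance:

```python
from itertools import product

def original(next_s, is_nav):
    # The form before the refactor: not (a and b)
    return not (next_s and is_nav)

def refactored(next_s, is_nav):
    # De Morgan's equivalent: not a or not b
    return not next_s or not is_nav

# The two forms agree on all four combinations of truth values.
agree = all(
    original(a, b) == refactored(a, b)
    for a, b in product([True, False], repeat=2)
)
```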

Comment on lines -522 to +517
- datalist = []
- for ele in soup:
-     datalist.append(ele.get("value"))
+ datalist = [ele.get("value") for ele in soup]

Function getfinal refactored with the following changes:

Comment on lines -564 to +555
- datalist = []
- for ele in soup:
-     datalist.append(ele.get("value"))
+ datalist = [ele.get("value") for ele in soup]

Function getfirst refactored with the following changes:

Comment on lines -1163 to +1132
- info_parsed = {}
- for i in range(0, len(f), 3):
-     info_parsed[f[i].lower().replace(' ', '_')] = f[i+2]
- return info_parsed
+ return {f[i].lower().replace(' ', '_'): f[i+2] for i in range(0, len(f), 3)}

Function parse_info_sharer refactored with the following changes:
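The same change in miniature: the stride-3 indexing suggests `f` is a flat list of (label, separator, value) triples, so a hypothetical input in that shape shows the loop and the dict comprehension producing identical results.

```python
# Hypothetical input: fields arrive flattened as label, ':', value triples.
f = ['File Name', ':', 'movie.mkv', 'File Size', ':', '1.2 GB']

# Before: mutate an empty dict inside a loop.
info_loop = {}
for i in range(0, len(f), 3):
    info_loop[f[i].lower().replace(' ', '_')] = f[i + 2]

# After: a single dict comprehension, as in the refactor.
info_comp = {f[i].lower().replace(' ', '_'): f[i + 2] for i in range(0, len(f), 3)}
```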

Comment on lines -1194 to +1166
-     res = client.post(url+'/dl', headers=headers, data=data).json()
+     res = client.post(f'{url}/dl', headers=headers, data=data).json()
  except:
      return info_parsed
  if 'url' in res and res['url']:
      info_parsed['error'] = False
      info_parsed['gdrive_link'] = res['url']
- if len(ddl_btn) and not forced_login and not 'url' in info_parsed:
+ if len(ddl_btn) and not forced_login and 'url' not in info_parsed:

Function sharer_pw refactored with the following changes:
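Two independent rewrites land in this hunk: string concatenation becomes an f-string, and `not 'url' in info_parsed` becomes the idiomatic `'url' not in info_parsed`. Both are pure equivalences; a minimal check with illustrative values (the `url` and `info_parsed` below are stand-ins, not the real request data):

```python
url = "https://example.com"  # illustrative value
concat = url + '/dl'
fstring = f'{url}/dl'        # same string, clearer intent

info_parsed = {'error': True}          # illustrative dict with no 'url' key
old_form = not 'url' in info_parsed    # parses as not ('url' in info_parsed)
new_form = 'url' not in info_parsed    # the idiomatic membership test
```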

Comment on lines -1303 to +1271
- bypassed_url = client.post(domain+"links/go", data=data, headers=headers).json()["url"]
+ bypassed_url = client.post(
+     f"{domain}links/go", data=data, headers=headers
+ ).json()["url"]

Function gplinks refactored with the following changes:

Comment on lines -1314 to +1282


Function droplink refactored with the following changes:

Comment on lines -1342 to +1309
- if response["success"]: return response["destination"]
- else: return response["msg"]
+ return response["destination"] if response["success"] else response["msg"]

Function linkvertise refactored with the following changes:
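The conditional expression is behavior-preserving here. Sketched with stand-in response dicts (the keys match the diff; the values are invented for illustration):

```python
def pick_old(response):
    if response["success"]:
        return response["destination"]
    else:
        return response["msg"]

def pick_new(response):
    # Conditional expression: one return statement, same semantics.
    return response["destination"] if response["success"] else response["msg"]

ok = {"success": True, "destination": "https://dest.example", "msg": ""}
fail = {"success": False, "destination": "", "msg": "not found"}
```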

Comment on lines -1366 to +1341
- url_base += matches[0]+'/'
+ url_base += f'{matches[0]}/'
  params = matches[1]
- res = client.get(url_base+'anchor', params=params)
+ res = client.get(f'{url_base}anchor', params=params)
  token = re.findall(r'"recaptcha-token" value="(.*?)"', res.text)[0]
  params = dict(pair.split('=') for pair in params.split('&'))
  post_data = post_data.format(params["v"], token, params["k"], params["co"])
- res = client.post(url_base+'reload', params=f'k={params["k"]}', data=post_data)
- answer = re.findall(r'"rresp","(.*?)"', res.text)[0]
- return answer
+ res = client.post(
+     f'{url_base}reload', params=f'k={params["k"]}', data=post_data
+ )
+ return re.findall(r'"rresp","(.*?)"', res.text)[0]

Function RecaptchaV3 refactored with the following changes:

- else: TERA_COOKIE = {"ndus": ndus}
+ TERA_COOKIE = None if ndus is None else {"ndus": ndus}

Lines 19-22 refactored with the following changes:

Comment on lines -167 to +165
- link = findall(r'\bhttps?://.*\.uptobox\.com/dl\S+', url)
- if link: return link[0]
+ if link := findall(r'\bhttps?://.*\.uptobox\.com/dl\S+', url):
+     return link[0]

Function uptobox refactored with the following changes:
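The walrus operator (`:=`, Python 3.8+) binds the `findall` result and tests it in one expression. A self-contained sketch with the same pattern, assuming `findall` refers to `re.findall` as the imports in such scripts usually do; the `sample` text is invented:

```python
import re

# Same regex as in the diff.
PATTERN = r'\bhttps?://.*\.uptobox\.com/dl\S+'

def extract_old(text):
    link = re.findall(PATTERN, text)
    if link:
        return link[0]
    return None

def extract_new(text):
    # Python 3.8+ assignment expression: bind and test in one step.
    if link := re.findall(PATTERN, text):
        return link[0]
    return None

sample = "mirror ready at https://abc.uptobox.com/dl/xyz now"
```

Note the walrus form requires Python 3.8 or later; the older two-line form works everywhere.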

Comment on lines -198 to +210
- final_link = findall(r'https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+', url)
- if final_link: return final_link[0]
+ if final_link := findall(
+     r'https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+', url
+ ):
+     return final_link[0]
  cget = create_scraper().request
  try:
      url = cget('get', url).url
      page = cget('get', url).text
  except Exception as e:
      return (f"ERROR: {e.__class__.__name__}")
- final_link = findall(r"\'(https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+)\'", page)
- if not final_link: return ("ERROR: No links found in this page")
- return final_link[0]
+ if final_link := findall(
+     r"\'(https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+)\'", page
+ ):
+     return final_link[0]
+ else:
+     return ("ERROR: No links found in this page")

Function mediafire refactored with the following changes:

Comment on lines -264 to +267
- direct_link = findall(r"(https?://letsupload\.io\/.+?)\'", res.text)
- if direct_link: return direct_link[0]
+ if direct_link := findall(r"(https?://letsupload\.io\/.+?)\'", res.text):
+     return direct_link[0]

Function letsupload refactored with the following changes:

Comment on lines -380 to +384
- direct_link = html_tree.xpath("//a[contains(@id,'uniqueExpirylink')]/@href")
- if direct_link:
+ if direct_link := html_tree.xpath(
+     "//a[contains(@id,'uniqueExpirylink')]/@href"
+ ):

Function racaty refactored with the following changes:

Comment on lines -416 to +432
- dl_url = soup.find("a", {"class": "ok btn-general btn-orange"})["href"]
- if dl_url: return dl_url
+ if dl_url := soup.find("a", {"class": "ok btn-general btn-orange"})[
+     "href"
+ ]:
+     return dl_url
  return (
      "ERROR: Unable to generate Direct Link 1fichier!")
  elif len(soup.find_all("div", {"class": "ct_warn"})) == 3:
      str_2 = soup.find_all("div", {"class": "ct_warn"})[-1]
      if "you must wait" in str(str_2).lower():
-         numbers = [int(word) for word in str(str_2).split() if word.isdigit()]
-         if numbers: return (
-             f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.")
+         if numbers := [
+             int(word) for word in str(str_2).split() if word.isdigit()
+         ]:
+             return (
+                 f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.")

Function fichier refactored with the following changes:

Comment on lines -530 to +540
- if TERA_COOKIE is None: return f"Terabox Cookie is not Set"
+ if TERA_COOKIE is None:
+     return "Terabox Cookie is not Set"

Function terabox refactored with the following changes:

Comment on lines -687 to +699
- direct_link = html_tree.xpath("//a[contains(@class,'btn btn-dow')]/@href")
- if direct_link:
+ if direct_link := html_tree.xpath(
+     "//a[contains(@class,'btn btn-dow')]/@href"
+ ):

Function akmfiles refactored with the following changes:

Comment on lines -33 to +43
- urls = []
- if otherss: texts = message.caption
- else: texts = message.text
+ texts = message.caption if otherss else message.text
  if texts in [None,""]: return
- for ele in texts.split():
-     if "http://" in ele or "https://" in ele:
-         urls.append(ele)
- if len(urls) == 0: return
+ urls = [ele for ele in texts.split() if "http://" in ele or "https://" in ele]
+ if not urls: return

  if bypasser.ispresent(ddllist,urls[0]):
      msg = app.send_message(message.chat.id, "⚡ __generating...__", reply_to_message_id=message.id)
- else:
-     if urls[0] in "https://olamovies" or urls[0] in "https://psa.pm/":
-         msg = app.send_message(message.chat.id, "🔎 __this might take some time...__", reply_to_message_id=message.id)
-     else:
-         msg = app.send_message(message.chat.id, "🔎 __bypassing...__", reply_to_message_id=message.id)
+ elif urls[0] in "https://olamovies" or urls[0] in "https://psa.pm/":
+     msg = app.send_message(message.chat.id, "🔎 __this might take some time...__", reply_to_message_id=message.id)
+ else:
+     msg = app.send_message(message.chat.id, "🔎 __bypassing...__", reply_to_message_id=message.id)

Function loopthread refactored with the following changes:
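The control-flow change hoists the `if`/`else` nested under `else` into an `elif`, removing one indentation level without changing which message is chosen. A stand-alone sketch with plain return strings in place of the Telegram `send_message` calls; the membership test is simplified here to a substring check (`"olamovies" in url`), since the original's `urls[0] in "https://olamovies"` form is preserved verbatim in the diff above:

```python
def choose_old(is_ddl, url):
    if is_ddl:
        return "generating"
    else:
        if "olamovies" in url or "psa.pm" in url:
            return "this might take some time"
        else:
            return "bypassing"

def choose_new(is_ddl, url):
    # Same branches, one level flatter: nested if/else becomes elif.
    if is_ddl:
        return "generating"
    elif "olamovies" in url or "psa.pm" in url:
        return "this might take some time"
    else:
        return "bypassing"

cases = [(True, "https://x"), (False, "https://olamovies/a"), (False, "https://other")]
agree = all(choose_old(d, u) == choose_new(d, u) for d, u in cases)
```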
