MarseyWorld/files/routes/search.py

import re
import time
from calendar import timegm
from sqlalchemy.orm import load_only
from files.helpers.regex import *
from files.helpers.sorting_and_time import *
from files.helpers.get import *
from files.routes.wrappers import *
from files.__main__ import app

valid_params = [
	'author',
	'domain',
	'nsfw',
	'post',
	'before',
	'after',
	'exact',
	'title',
	'sentto',
	'hole',
	'subreddit',
]
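
# searchparse() splits a raw query into `operator:value` pairs plus free text. Recognised
# operators (valid_params) are stripped from the text; whatever remains is kept whole as
# 'full_text' and tokenized into 'q'. For example, `author:alice fish soup` would presumably
# yield {'author': 'alice', 'full_text': 'fish soup', 'q': ['fish', 'soup']}, depending on the
# exact behaviour of query_regex, search_token_regex, and escape_for_search.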
def searchparse(text):
	text = text.lower()

	criteria = {x[0]:x[1] for x in query_regex.findall(text)}
	for x in criteria:
		if x in valid_params:
			text = text.replace(f"{x}:{criteria[x]}", "")
	text = text.strip()

	if text:
		criteria['full_text'] = text

		criteria['q'] = []
		for m in search_token_regex.finditer(text):
			token = m[1] if m[1] else m[2]
			if not token: token = ''
			token = escape_for_search(token)
			criteria['q'].append(token)

	return criteria
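
# Post search. Blocked authors are excluded, shadowbanned authors are hidden from viewers who
# can't see them, and users without POST_COMMENT_MODERATION never see deleted, removed, or
# private posts.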
@app.get("/search/posts")
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400)
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400, key_func=get_ID)
@auth_required
def searchposts(v):
	query = request.values.get("q", '').strip()
	if not query:
		abort(403, "Empty searches aren't allowed!")

	page = get_page()

	sort = request.values.get("sort", "new").lower()
	t = request.values.get('t', 'all').lower()

	criteria = searchparse(query)

	posts = g.db.query(Post).options(load_only(Post.id)) \
		.join(Post.author) \
		.filter(Post.author_id.notin_(v.userblocks))

	if v.admin_level < PERMS['POST_COMMENT_MODERATION']:
		posts = posts.filter(
			Post.deleted_utc == 0,
			Post.is_banned == False,
			Post.private == False)

	if 'author' in criteria:
		author = get_user(criteria['author'], v=v)
		if author.id != v.id:
			posts = posts.filter(Post.ghost == False)
		if not author.is_visible_to(v):
			if v.client:
				abort(403, f"@{author.username}'s profile is private; You can't use the 'author' syntax on them")
			return render_template("search.html",
				v=v,
				query=query,
				total=0,
				page=page,
				listing=[],
				sort=sort,
				t=t,
				domain=None,
				domain_obj=None,
				error=f"@{author.username}'s profile is private; You can't use the 'author' syntax on them."
			), 403

		posts = posts.filter(Post.author_id == author.id)
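
	# 'exact' switches to whole-word matching via POSIX word-boundary regexes;
	# otherwise every parsed token must appear as a case-insensitive substring (ILIKE).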
	if 'exact' in criteria and 'full_text' in criteria:
		regex_str = '[[:<:]]'+criteria['full_text']+'[[:>:]]' # https://docs.oracle.com/cd/E17952_01/mysql-5.5-en/regexp.html "word boundaries"
		if 'title' in criteria:
			words = [Post.title.regexp_match(regex_str)]
		else:
			words = [or_(Post.title.regexp_match(regex_str), Post.body.regexp_match(regex_str))]
		posts = posts.filter(*words)
	elif 'q' in criteria:
		if 'title' in criteria:
			words = [or_(Post.title.ilike('%'+x+'%')) for x in criteria['q']]
		else:
			words = [or_(
				Post.title.ilike('%'+x+'%'),
				Post.body.ilike('%'+x+'%'),
				Post.url.ilike('%'+x+'%'),
			) for x in criteria['q']]
		posts = posts.filter(*words)

	if 'nsfw' in criteria: posts = posts.filter(Post.nsfw == True)

	if 'domain' in criteria:
		domain = criteria['domain']
		domain = escape_for_search(domain)

		posts = posts.filter(
			or_(
				Post.url.ilike("https://"+domain+'/%'),
				Post.url.ilike("https://"+domain),
				Post.url.ilike("https://www."+domain+'/%'),
				Post.url.ilike("https://www."+domain),
				Post.url.ilike("https://old."+domain+'/%'),
				Post.url.ilike("https://old."+domain)
			)
		)

	if 'subreddit' in criteria:
		subreddit = criteria['subreddit']
		if not subreddit_name_regex.fullmatch(subreddit):
			abort(400, "Invalid subreddit name.")
		posts = posts.filter(Post.url.ilike(f"https://old.reddit.com/r/{subreddit}/%"))

	if 'hole' in criteria:
		posts = posts.filter(Post.hole == criteria['hole'])
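
	# 'before' / 'after' accept either a unix timestamp or a YYYY-MM-DD date
	# (the same parsing is repeated in the comment, message, and user searches below).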
	if 'after' in criteria:
		after = criteria['after']
		try: after = int(after)
		except:
			try: after = timegm(time.strptime(after, "%Y-%m-%d"))
			except: abort(400)
		posts = posts.filter(Post.created_utc > after)

	if 'before' in criteria:
		before = criteria['before']
		try: before = int(before)
		except:
			try: before = timegm(time.strptime(before, "%Y-%m-%d"))
			except: abort(400)
		posts = posts.filter(Post.created_utc < before)

	posts = apply_time_filter(t, posts, Post)

	if not v.can_see_shadowbanned:
		posts = posts.join(Post.author).filter(User.shadowbanned == None)

	total = posts.count()

	posts = sort_objects(sort, posts, Post)

	posts = posts.offset(PAGE_SIZE * (page - 1)).limit(PAGE_SIZE).all()

	ids = [x.id for x in posts]

	posts = get_posts(ids, v=v)

	if v.client: return {"total":total, "data":[x.json for x in posts]}

	return render_template("search.html",
		v=v,
		query=query,
		page=page,
		listing=posts,
		sort=sort,
		t=t,
		total=total
	)
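
# Comment search: covers comments left on posts and on user walls, excluding blocked authors
# and, for users without POST_COMMENT_MODERATION, removed/deleted comments and comments on
# private posts.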
@app.get("/search/comments")
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400)
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400, key_func=get_ID)
@auth_required
def searchcomments(v):
	query = request.values.get("q", '').strip()
	if not query:
		abort(403, "Empty searches aren't allowed!")

	page = get_page()

	sort = request.values.get("sort", "new").lower()
	t = request.values.get('t', 'all').lower()
	criteria = searchparse(query)

	comments = g.db.query(Comment).options(load_only(Comment.id)).outerjoin(Comment.post) \
		.filter(
			or_(Comment.parent_post != None, Comment.wall_user_id != None),
			Comment.author_id.notin_(v.userblocks),
		)

	if 'post' in criteria:
		try: post = int(criteria['post'])
		except: abort(404)
		comments = comments.filter(Comment.parent_post == post)

	if 'author' in criteria:
		author = get_user(criteria['author'], v=v)
		if author.id != v.id:
			comments = comments.filter(Comment.ghost == False)
		if not author.is_visible_to(v):
			if v.client:
				abort(403, f"@{author.username}'s profile is private; You can't use the 'author' syntax on them")
			return render_template("search_comments.html", v=v, query=query, total=0, page=page, comments=[], sort=sort, t=t, error=f"@{author.username}'s profile is private; You can't use the 'author' syntax on them!"), 403
		else: comments = comments.filter(Comment.author_id == author.id)
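
	# Free-text terms are matched against the comment body's tsvector column. The
	# search_regex_* helpers presumably strip tsquery-unsafe characters, escape quotes,
	# and turn multi-word tokens into '<->' (followed-by) phrases before the tokens are
	# AND-ed together.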
	if 'q' in criteria:
		tokens = map(lambda x: search_regex_1.sub('', x), criteria['q'])
		tokens = filter(lambda x: len(x) > 0, tokens)
		tokens = map(lambda x: search_regex_2.sub("\\'", x), tokens)
		tokens = map(lambda x: x.strip(), tokens)
		tokens = map(lambda x: search_regex_3.sub(' <-> ', x), tokens)
		comments = comments.filter(Comment.body_ts.match(
			' & '.join(tokens),
			postgresql_regconfig='english'))

	if 'nsfw' in criteria: comments = comments.filter(Comment.nsfw == True)

	if 'hole' in criteria:
		comments = comments.filter(Post.hole == criteria['hole'])

	comments = apply_time_filter(t, comments, Comment)

	if v.admin_level < PERMS['POST_COMMENT_MODERATION']:
		private = [x[0] for x in g.db.query(Post.id).filter(Post.private == True)]

		comments = comments.filter(
			Comment.is_banned == False,
			Comment.deleted_utc == 0,
			or_(
				Comment.parent_post.notin_(private),
				Comment.wall_user_id != None
			)
		)

	if 'after' in criteria:
		after = criteria['after']
		try: after = int(after)
		except:
			try: after = timegm(time.strptime(after, "%Y-%m-%d"))
			except: abort(400)
		comments = comments.filter(Comment.created_utc > after)

	if 'before' in criteria:
		before = criteria['before']
		try: before = int(before)
		except:
			try: before = timegm(time.strptime(before, "%Y-%m-%d"))
			except: abort(400)
		comments = comments.filter(Comment.created_utc < before)

	if not v.can_see_shadowbanned:
		comments = comments.join(Comment.author).filter(User.shadowbanned == None)

	total = comments.count()
	comments = sort_objects(sort, comments, Comment)

	comments = comments.offset(PAGE_SIZE * (page - 1)).limit(PAGE_SIZE).all()

	ids = [x.id for x in comments]

	comments = get_comments(ids, v=v)

	if v.client: return {"total":total, "data":[x.json for x in comments]}
	return render_template("search_comments.html", v=v, query=query, page=page, comments=comments, sort=sort, t=t, total=total, standalone=True)
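
# DM search: only messages the viewer sent or received are searchable, plus modmail for
# admins holding the VIEW_MODMAIL permission.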
@app.get("/search/messages")
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400)
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400, key_func=get_ID)
@auth_required
def searchmessages(v):
	query = request.values.get("q", '').strip()
	if not query:
		abort(403, "Empty searches aren't allowed!")

	page = get_page()

	sort = request.values.get("sort", "new").lower()
	t = request.values.get('t', 'all').lower()
	criteria = searchparse(query)

	dm_conditions = [Comment.author_id == v.id, Comment.sentto == v.id]
	if v.admin_level >= PERMS['VIEW_MODMAIL']:
		dm_conditions.append(Comment.sentto == MODMAIL_ID)

	comments = g.db.query(Comment).options(load_only(Comment.id)) \
		.filter(
			Comment.sentto != None,
			Comment.parent_post == None,
			or_(*dm_conditions),
		)

	if 'author' in criteria:
		comments = comments.filter(Comment.ghost == False)
		author = get_user(criteria['author'], v=v)
		if not author.is_visible_to(v):
			if v.client:
				abort(403, f"@{author.username}'s profile is private; You can't use the 'author' syntax on them")
			return render_template("search_comments.html", v=v, query=query, total=0, page=page, comments=[], sort=sort, t=t, error=f"@{author.username}'s profile is private; You can't use the 'author' syntax on them!"), 403
		else: comments = comments.filter(Comment.author_id == author.id)

	if 'q' in criteria:
		tokens = map(lambda x: search_regex_1.sub('', x), criteria['q'])
		tokens = filter(lambda x: len(x) > 0, tokens)
		tokens = map(lambda x: search_regex_2.sub("\\'", x), tokens)
		tokens = map(lambda x: x.strip(), tokens)
		tokens = map(lambda x: search_regex_3.sub(' <-> ', x), tokens)
		comments = comments.filter(Comment.body_ts.match(
			' & '.join(tokens),
			postgresql_regconfig='english'))

	comments = apply_time_filter(t, comments, Comment)

	if 'after' in criteria:
		after = criteria['after']
		try: after = int(after)
		except:
			try: after = timegm(time.strptime(after, "%Y-%m-%d"))
			except: abort(400)
		comments = comments.filter(Comment.created_utc > after)

	if 'before' in criteria:
		before = criteria['before']
		try: before = int(before)
		except:
			try: before = timegm(time.strptime(before, "%Y-%m-%d"))
			except: abort(400)
		comments = comments.filter(Comment.created_utc < before)

	if 'sentto' in criteria:
		sentto = criteria['sentto']
		sentto = get_user(sentto, graceful=True)
		if not sentto:
			abort(400, "The `sentto` field must contain an existing user's username!")
		comments = comments.filter(Comment.sentto == sentto.id)

	if not v.can_see_shadowbanned:
		comments = comments.join(Comment.author).filter(User.shadowbanned == None)

	total = comments.count()
	comments = sort_objects(sort, comments, Comment)

	comments = comments.offset(PAGE_SIZE * (page - 1)).limit(PAGE_SIZE).all()

	for x in comments: x.unread = True
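
	# Collapse results to their top_comment (dict.fromkeys dedupes while preserving order)
	# so each thread is rendered once with its replies (render_replies=True).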
	comments = dict.fromkeys([x.top_comment for x in comments])

	if v.client: return {"total":total, "data":[x.json for x in comments]}
	return render_template("search_comments.html", v=v, query=query, page=page, comments=comments, sort=sort, t=t, total=total, standalone=True, render_replies=True)
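
# User search: matches the term against current, original, and pre-lock usernames,
# sorting exact matches first and breaking ties by subscriber count.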
@app.get("/search/users")
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400)
@limiter.limit(DEFAULT_RATELIMIT, deduct_when=lambda response: response.status_code < 400, key_func=get_ID)
@auth_required
def searchusers(v):
	query = request.values.get("q", '').strip()
	if not query:
		abort(403, "Empty searches aren't allowed!")

	page = get_page()

	users = g.db.query(User)

	criteria = searchparse(query)

	if 'after' in criteria:
		after = criteria['after']
		try: after = int(after)
		except:
			try: after = timegm(time.strptime(after, "%Y-%m-%d"))
			except: abort(400)
		users = users.filter(User.created_utc > after)

	if 'before' in criteria:
		before = criteria['before']
		try: before = int(before)
		except:
			try: before = timegm(time.strptime(before, "%Y-%m-%d"))
			except: abort(400)
		users = users.filter(User.created_utc < before)
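
	# Only the first free-text token is used (after sanitize_username) as the search term.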
	if 'q' in criteria:
		term = criteria['q'][0]
		term = sanitize_username(term)

		users = users.filter(
			or_(
				User.username.ilike(f'%{term}%'),
				User.original_username.ilike(f'%{term}%'),
				User.prelock_username.ilike(f'%{term}%'),
			)
		).order_by(User.username.ilike(term).desc(), User.stored_subscriber_count.desc())

	total = users.count()

	users = users.offset(PAGE_SIZE * (page-1)).limit(PAGE_SIZE).all()

	if v.client: return {"data": [x.json for x in users]}
	return render_template("search_users.html", v=v, query=query, page=page, users=users, total=total)