I am also ‘Andrew’, the admin of this server. I’ll try to remember to only use this account for posting stuff.

  • 0 Posts
  • 51 Comments
Joined 9 months ago
Cake day: February 17th, 2025


  • It’s straightforward enough to do in back-end code - just reject a query if parameters are missing - but I don’t think there’s a way to define a schema that then gets used to auto-generate the documentation and validate the requests. If the request fails validation, the back-end never sees it.

    For something like https://freamon.github.io/piefed-api/#/Misc/get_api_alpha_search, the docs show that ‘q’ and ‘type_’ are required, and everything else is optional. The schema definition looks like:

    /api/alpha/search:
        get:
          parameters:
            - in: query
              name: q
              schema:
                type: string
              required: true
            - in: query
              name: type_
              schema:
                type: string
                enum:
                  - Communities
                  - Posts
                  - Users
                  - Url
              required: true
            - in: query
              name: limit
              schema:
                type: integer
              required: false
    

    required is a simple boolean for each individual field - you can say every field is required, or no fields are required, but I haven’t come across a way to say that at least one field is required.


  • PieFed has a similar API endpoint. It used to be scoped, but was changed at the request of app developers. It’s how people browse sites by ‘New Comments’, and - for a GET request - it’s not really possible to document and validate that an endpoint needs at least one of something (i.e. that none of ‘post_id’, ‘user_id’, or ‘community_id’ is individually required, but at least one of them must be present).
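    Since the “at least one of these parameters” constraint can’t be expressed with OpenAPI’s per-field required flag, the check has to live in the back-end handler itself. A minimal sketch of that check (hypothetical function and parameter names, not PieFed’s actual code):

    ```python
    # Parameters where the endpoint needs at least one to be present,
    # even though none is individually required.
    AT_LEAST_ONE = ("post_id", "user_id", "community_id")

    def validate_params(args: dict) -> tuple[bool, str | None]:
        """Return (ok, error_message) for an incoming query-string dict."""
        if not any(key in args for key in AT_LEAST_ONE):
            return False, "one of post_id, user_id or community_id is required"
        return True, None
    ```

    The handler would call this before doing any work and return a 400 on failure - but the auto-generated docs have no way of knowing the constraint exists, which is the gap described above.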

    It’s unlikely that these crawlers will discover PieFed’s API, but I guess it’s no surprise that they’ve moved on from basic HTML crawling to probing APIs. In the meantime, I’ve added some basic protection to the back-end for anonymous, unscoped requests to PieFed’s endpoint.




  • I’ll just remove the ‘freamon’ one when the auto-generated one is up to date.

    The manually-generated one had 5 missing routes, which I’ve since added.

    The auto-generated one at crust has about 48 missing routes. It’s the right approach, and I’ll help out with it when I can, but - for now at least - it makes no sense to redirect people to it (either automatically or via a comment).


    Some thoughts for @wjs018@piefed.social

    /site/instance_chooser probably doesn’t need to be a route. It’s just the data format returned by /site/instance_chooser_search. As a route, it’s returning the instance info for the site you’re querying, so if you want to keep it as a route, it should probably be called /site/instance_info or something.

    In the query for /site/instance_chooser_search, nsfw and newbie are both booleans. In the rest of the API, these are sent as ‘true’ or ‘false’, but they are ‘yes’ and ‘no’ for this route. The newbie query should probably be newbie_friendly. In the response, monthsmonitored should probably be months_monitored.

    There’s no way to exclude communities from the response to /topic/list and /feed/list: if you don’t put ‘include_communities’ in the query, it defaults to True, but if you put ‘include_communities=false’ in the query it ends up being True as well (because the ‘include_communities’ string is present in the data, and a non-empty string is truthy).
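    The include_communities behaviour described above is the classic query-string boolean bug: the parameter arrives as the string ‘false’, and a non-empty string is truthy. A sketch of the bug and a fix (hypothetical helper names, not the project’s actual code):

    ```python
    def parse_bool_buggy(args: dict, name: str, default: bool = True) -> bool:
        # args.get() returns the *string* 'false', and bool('false') is True,
        # so passing include_communities=false still yields True.
        return bool(args.get(name, default))

    def parse_bool(args: dict, name: str, default: bool = True) -> bool:
        """Parse a query-string boolean the way the rest of the API expects."""
        raw = args.get(name)
        if raw is None:
            return default
        return raw.lower() in ("true", "1", "yes")
    ```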



  • I’d be wary of getting a conversation node from anybody other than the original author (as described in the second approach).

    There’s a reason why, if you want to resolve a missing post in Lemmy, etc, you have to use the fedi-link to retrieve it from its source, not just from any other instance that has a copy (because, like the “context owner”, they could be lying).

    For Group-based apps, conversation backfill is mostly an issue for new instances, which might have a community’s posts (from its outbox), but will be missing old comments. Comments can be automatically and recursively retrieved when they are replied to or upvoted by a remote actor, but fetching from the source (as you arguably should do) is complicated by instances closing (there’s still loads of comments from feddit.de and kbin.social out there - it will be much worse when lemm.ee disappears). So perhaps Lemmy could also benefit from post authors being considered the trusted owner of any comments they receive.


  • What is the update delay for Fediseer?

    I don’t know. It’s not something I’m familiar with - it might just default to saying ‘closed’ if it doesn’t have the data.

    It’s interesting that the obvious bot accounts on those instances were set up in mid-March last year, so I’m guessing that these are somebody’s army that they’ve used before, but overplayed their hand when they turned it on the DonaldJMusk person. The admins can reasonably be blamed for setting up instances with open registrations and no protections and then forgetting about them, but I’d be wary of blaming them for being behind the attack directly. The ‘nicole’ person is unlikely to have used their own instance - it’s probably just someone with the same MO as whoever owns the bots, finding and exploiting vulnerable instances.












  • Clarkson has been trying to warn us for years, and we haven’t been listening. He punched a producer when his dinner wasn’t on time, to highlight the impending delays to food deliveries after Brexit. He left the BBC to work for Amazon, presenting a show that was a shadow of its former self, to illustrate how billionaires diminish everything they touch.

    He couldn’t be clearer with his messaging, but the UK continues to ignore him.