Current logic unconditionally adds public addressing to "cc"
and follower addressing to "to" after attempting to strip each
from the other field. This creates serious problems:
First, the bug prompting this investigation and fix:
unconditional addition creates duplicates when addressing
URIs already were in their intended final field, as is
prominently the case for all "unlisted" posts.
Since List.delete/2 only removes the first occurrence,
this then broke follower-address stripping later on,
making the policy ineffective.
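For illustration, a minimal sketch of the List.delete/2 pitfall
(the URIs are examples, not actual data):

    public = "https://www.w3.org/ns/activitystreams#Public"
    followers = "https://example.com/users/alice/followers"

    # List.delete/2 removes only the first matching element,
    # so a pre-existing duplicate survives the "strip" step:
    List.delete([public, public, followers], public)
    # => one copy of the public URI remains in the result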
It’s also just not safe in general with respect to non-public
addressing: e.g. pre-existing duplicates didn’t get fully stripped,
bespoke addressing modes with only one of public addressing
or follower addressing were mangled, and most importantly:
any belatedly received DM or follower-only post
also got public addressing added!
Shockingly, this last point was actually asserted as "correct" in tests;
it appears to be a mistake from mindless match adjustments
while fixing crashes on nil addressing in
10c792110e.
Clean this sloppy logic up, making sure no more duplicates are
added by us, all instances of the relevant addresses are purged and
only re-added when they actually existed to begin with.
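A rough sketch of the intended shape (illustrative only, not the
actual patch):

    defmodule AddressingSketch do
      # Remember whether each address existed anywhere, purge every
      # occurrence from both fields, then re-add each address only to
      # its intended field and only if it existed to begin with.
      def fix(to, cc, public, followers) do
        had_public = public in to or public in cc
        had_followers = followers in to or followers in cc

        strip = fn list -> Enum.reject(list, &(&1 in [public, followers])) end
        {to, cc} = {strip.(to), strip.(cc)}

        to = if had_followers, do: to ++ [followers], else: to
        cc = if had_public, do: cc ++ [public], else: cc
        {to, cc}
      end
    end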
The current AP spec demands that anonymous objects have an id field,
but with its value explicitly set to JSON null. However, as it turns
out, this is incompatible with JSON-LD requiring `@id` to be a string,
and thus the AP spec is incompatible with the Activity Streams spec
it is based on.
This is an issue for (the few) AP implementers actually performing
JSON-LD processing, like IceShrimp.NET.
This was uncovered by IceShrimp.NET’s zotan due to our adoption of
anonymous objects for emoji in f101886709.
The issue is being discussed by the W3C and will most likely be
resolved via an erratum redefining anonymous objects to completely
omit the id field, just like transient objects already do. See:
https://github.com/w3c/activitypub/issues/476
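For illustration, the two shapes side by side (an arbitrary Image
object, written as the Elixir maps we serialise to JSON):

    # Current spec text: anonymous object with an explicit null id;
    # JSON-LD processors reject this since @id must be a string.
    %{"type" => "Image", "id" => nil, "url" => "https://example.com/e.png"}

    # Likely erratum shape: the id field is omitted entirely,
    # just like for transient objects.
    %{"type" => "Image", "url" => "https://example.com/e.png"}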
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/848
Ever since the browser frontend switcher was introduced in
de64c6c54a, /akkoma counts as
an API prefix and thus gets skipped by frontend plugs,
breaking the old Swagger UI path of /akkoma/swagger-ui.
Do the simple thing and change the frontend path to
/pleroma/swaggerui, which isn't an API path and can't collide
with frontend user paths given "pleroma" is a reserved nickname.
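A hedged sketch of the idea (the exact plug and paths in the real
patch may differ):

    # Serve the bundled Swagger UI as static files under a
    # non-API prefix so frontend plugs no longer skip it:
    plug Plug.Static,
      at: "/pleroma/swaggerui",
      from: {:pleroma, "priv/static/swaggerui"}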
Reported in:
https://meta.akkoma.dev/t/view-all-endpoints/269/7
https://meta.akkoma.dev/t/swagger-ui-not-loading/728
Usually an id should point to another AP object,
and the image file isn’t an AP object. We currently
do not provide standalone AP objects for emoji and
don't keep track of remote emoji at all.
Thus just federate them as anonymous objects,
i.e. objects only existing within a parent context
and using an explicit null id.
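For illustration, roughly the shape this produces (names and URLs
are made up):

    # Emoji tag as an anonymous object: explicit null id, since the
    # image file behind it is not an AP object we could point to.
    %{
      "type" => "Emoji",
      "id" => nil,
      "name" => ":blobcat:",
      "icon" => %{
        "type" => "Image",
        "url" => "https://example.com/emoji/blobcat.png"
      }
    }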
IceShrimp.NET previously adopted anonymous objects
for remote emoji without any apparent issues. See:
333611f65e
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/694
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time, up to the point of OOM kills, and
globally setting ERL_FULLSWEEP_AFTER=0 has proven to be an effective
countermeasure. However, this incurs increased CPU costs everywhere
and is thus not suitable to apply out of the box.
Apparently long-lived Phoenix websocket processes are known to
often cause exactly this by getting into a state unfavourable
for the garbage collector.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes
and use a more lenient value of 20 to keep the added perf cost
in check.
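A minimal sketch of the tuning, run inside the websocket process
(exact placement in the actual patch may differ):

    # Force a full GC sweep after 20 minor collections so lingering
    # binary references get reclaimed; affects only this process.
    :erlang.process_flag(:fullsweep_after, 20)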
Testing on one affected instance appears to confirm this theory.
Ref.:
https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060
Tested-by: bjo
Websites are increasingly getting more bloated with tricks like inlining content (e.g., CNN.com), which puts pages at or above 5MB. This value may still be too low.
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but this often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had image links that expire (e.g., Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.
Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.
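A hedged sketch of the read-through pattern (cache name and the
database helper are illustrative, not the actual function names):

    url = "https://example.com/article"

    # Return the cached entry if present; otherwise run the fallback,
    # which consults the database and commits the result to the cache.
    Cachex.fetch(:rich_media_cache, url, fn url ->
      case get_card_from_database(url) do
        nil -> {:ignore, nil}
        card -> {:commit, card}
      end
    end)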
Implementation notes:
- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL (see the sketch after this list)
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receive a negative cache entry for 15 minutes
- URLs that fail with an expected error receive a negative cache entry with no TTL
- Activities with no detected URLs have a nil value inserted into the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering
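A condensed sketch of the worker shape described above (module name,
registry choice, and the fetch stub are all illustrative):

    defmodule RichMediaWorkerSketch do
      # Registering under a global name keyed by the URL means a second
      # worker for the same URL declines to run instead of duplicating work.
      def spawn_fetcher(url) do
        Task.start(fn ->
          case :global.register_name({:rich_media_fetcher, url}, self()) do
            :yes -> fetch_with_retries(url, 3, 1_000)
            :no -> :already_running
          end
        end)
      end

      defp fetch_with_retries(_url, 0, _sleep_ms), do: :error

      defp fetch_with_retries(url, attempts, sleep_ms) do
        case fetch(url) do
          {:ok, data} ->
            {:ok, data}

          {:error, _} ->
            # increasing sleep time between attempts
            Process.sleep(sleep_ms)
            fetch_with_retries(url, attempts - 1, sleep_ms * 2)
        end
      end

      # Stand-in for the actual HTTP fetch + parse.
      defp fetch(_url), do: {:error, :unimplemented}
    end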
Overall performance of timelines and creating new posts which contain URLs is greatly improved.