Part of #24256.
Clear up old action logs to free up storage space.
Users will see a message indicating that the log has been cleared if
they view old tasks.
<img width="1361" alt="image"
src="https://github.com/user-attachments/assets/9f0f3a3a-bc5a-402f-90ca-49282d196c22">
Docs: https://gitea.com/gitea/docs/pulls/40
---------
Co-authored-by: silverwind <me@silverwind.io>
(cherry picked from commit 687c1182482ad9443a5911c068b317a91c91d586)
Conflicts:
custom/conf/app.example.ini
routers/web/repo/actions/view.go
trivial context conflict
Fixes #22722
Currently, it is not possible to force push to a branch with branch
protection rules in place. There are times when this is necessary
(CI workflows, administrative tasks, etc.).
The current workaround is to rename/remove the branch protection,
perform the force push, and then reinstate the protections.
Provide an additional section in the branch protection rules to allow
users to specify which users with push access can also force push to the
branch. The default value of the rule will be set to `Disabled`, and the
UI is intuitive and very similar to the `Push` section.
It is worth noting in this implementation that allowing force push does
not override regular push access, and both will need to be enabled for a
user to force push.
This applies to manually force pushing to a remote, and also to
updating a PR by rebase in the Gitea UI (which requires a force push).
This modifies the `BranchProtection` API structs to add:
- `enable_force_push bool`
- `enable_force_push_whitelist bool`
- `force_push_whitelist_usernames string[]`
- `force_push_whitelist_teams string[]`
- `force_push_whitelist_deploy_keys bool`
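For illustration, a minimal sketch of how these additions might look on the API struct (the surrounding struct, Go field names, and JSON tags are assumptions based on the list above, not the literal Gitea definitions):

```go
// Sketch of the new force-push fields on the branch protection API
// structs; layout and tags are illustrative assumptions.
package structs

type BranchProtection struct {
	// ... existing fields elided ...
	EnableForcePush              bool     `json:"enable_force_push"`
	EnableForcePushWhitelist     bool     `json:"enable_force_push_whitelist"`
	ForcePushWhitelistUsernames  []string `json:"force_push_whitelist_usernames"`
	ForcePushWhitelistTeams      []string `json:"force_push_whitelist_teams"`
	ForcePushWhitelistDeployKeys bool     `json:"force_push_whitelist_deploy_keys"`
}
```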
<img width="943" alt="image"
src="https://github.com/go-gitea/gitea/assets/79623665/7491899c-d816-45d5-be84-8512abd156bf">
branch `test` being a protected branch:
![image](https://github.com/go-gitea/gitea/assets/79623665/e018e6e9-b7b2-4bd3-808e-4947d7da35cc)
<img width="1038" alt="image"
src="https://github.com/go-gitea/gitea/assets/79623665/57ead13e-9006-459f-b83c-7079e6f4c654">
---------
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
(cherry picked from commit 12cb1d2998f2a307713ce979f8d585711e92061c)
Fix #31657.
According to the
[doc](https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#onschedule)
of GitHub Actions, the timezone for cron should be UTC, not the local
timezone. Gitea Actions has no reason to deviate from this, so
I think it's a bug.
However, Gitea Actions has extended the syntax, as it supports
descriptors like `@weekly` and `@every 5m`, and supports specifying the
timezone like `TZ=UTC 0 10 * * *`. So we can make it use UTC only when
the timezone is not specified, to be compatible with GitHub Actions, and
also respect the user's specified timezone.
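A minimal sketch of that behavior, assuming a `robfig/cron/v3`-style parser; the function below is illustrative, not Gitea's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/robfig/cron/v3"
)

// parseSchedule parses a cron spec in UTC unless the spec already
// carries an explicit timezone (e.g. "TZ=Asia/Shanghai 0 10 * * *").
func parseSchedule(spec string) (cron.Schedule, error) {
	parser := cron.NewParser(
		cron.Minute | cron.Hour | cron.Dom | cron.Month | cron.Dow | cron.Descriptor,
	)
	if !strings.HasPrefix(spec, "TZ=") && !strings.HasPrefix(spec, "CRON_TZ=") {
		// Default to UTC, matching GitHub Actions.
		spec = "TZ=UTC " + spec
	}
	return parser.Parse(spec)
}

func main() {
	s, err := parseSchedule("0 10 * * *")
	if err != nil {
		panic(err)
	}
	fmt.Println("next run:", s.Next(time.Now()))
}
```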
It does break existing behavior because the times at which scheduled
tasks run would change, and it may confuse users, so I don't think we
should backport this.
## ⚠️ BREAKING ⚠️
If the server's local time zone is not UTC, a scheduled task would run
at a different time after upgrading Gitea to this version.
(cherry picked from commit 21a73ae642b15982a911837775c9583deb47220c)
Fix #31707.
Also related to #31715.
Some Actions resources can have different types of ownership. It can
be:
- global: all repos and orgs/users can use it.
- org/user level: only the org/user can use it.
- repo level: only the repo can use it.
There are two ways to distinguish org/user level from repo level:
1. `{owner_id: 1, repo_id: 2}` for repo level, and `{owner_id: 1,
repo_id: 0}` for org level.
2. `{owner_id: 0, repo_id: 2}` for repo level, and `{owner_id: 1,
repo_id: 0}` for org level.
The first way seems more reasonable, but it isn't quite right. The point
is that although a resource like a runner belongs to a repo (it can be
used by the repo), the runner doesn't belong to the repo's org (other
repos in the same org cannot use the runner). So the second method
makes more sense.
The first way is also not friendly to query: we must set the repo id
to zero to avoid wrong results.
So #31715 should be right, and the simplest way to fix #31707 is
just:
```diff
- shared.GetRegistrationToken(ctx, ctx.Repo.Repository.OwnerID, ctx.Repo.Repository.ID)
+ shared.GetRegistrationToken(ctx, 0, ctx.Repo.Repository.ID)
```
However, it is quite intuitive to set both the owner id and the repo id,
since the repo belongs to the owner, so I prefer to stay compatible with
that. If we get both an owner id and a repo id that are non-zero when
creating or finding, it's very clear that the caller wants a repo-level
resource but set the owner id accidentally, so it's OK to accept it and
reset the owner id to zero.
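A minimal sketch of that normalization, with a hypothetical helper name (not Gitea's actual code):

```go
package actions

// normalizeOwnerRepoID accepts a caller passing both ids, but resets the
// owner id to zero so that repo-level resources never also carry an
// owner id. Illustrative sketch only.
func normalizeOwnerRepoID(ownerID, repoID int64) (int64, int64) {
	if ownerID != 0 && repoID != 0 {
		// The caller clearly wants a repo-level resource but set the
		// owner id accidentally; fix it to zero.
		ownerID = 0
	}
	return ownerID, repoID
}
```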
(cherry picked from commit a33e74d40d356e8f628ac06a131cb203a3609dec)
Fix #31137.
Replaces #31623 and #31697.
When migrating LFS objects, if any object fails (for example, some
objects are lost, which is not really critical), Gitea will stop
migrating LFS immediately but still treat the migration as successful.
This PR checks the error according to the [LFS api
doc](https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md#successful-responses).
> LFS object error codes should match HTTP status codes where possible:
>
> - 404 - The object does not exist on the server.
> - 409 - The specified hash algorithm disagrees with the server's
acceptable options.
> - 410 - The object was removed by the owner.
> - 422 - Validation error.
If the error is `404`, it's safe to ignore it and continue migration.
Otherwise, stop the migration and mark it as failed to ensure data
integrity of LFS objects.
Maybe we should also ignore other errors (perhaps `410`? I'm not sure
what the difference is between "does not exist" and "removed by the
owner"). We can add them later if users report that they have failed to
migrate LFS because of an error which should be ignored.
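A hedged sketch of that check; the `ObjectError` shape follows the batch API doc, but the helper name and how it is wired into the migration loop are assumptions:

```go
package lfs

import (
	"fmt"
	"net/http"
)

// ObjectError mirrors the per-object error in an LFS batch response.
type ObjectError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// checkObjectError ignores a 404 (the object simply does not exist on
// the server) and fails the migration for anything else, to preserve
// the integrity of the migrated LFS objects.
func checkObjectError(oid string, objErr *ObjectError) error {
	if objErr == nil {
		return nil
	}
	if objErr.Code == http.StatusNotFound {
		return nil // safe to skip and continue the migration
	}
	return fmt.Errorf("LFS object %s failed with %d: %s", oid, objErr.Code, objErr.Message)
}
```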
(cherry picked from commit 09b56fc0690317891829906d45c1d645794c63d5)
There's already `initActionsTasks`; moving `registerActionsCleanup` into
it avoids an additional check for whether Actions is enabled.
And we don't really need `OlderThanConfig`.
(cherry picked from commit f989f464386139592b6911cad1be4c901eb97fe5)
Fix #31707.
It's split from #31724.
Although #31724 could also fix #31707, it changes a lot, so it's not a
good idea to backport it.
(cherry picked from commit 81fa471119a6733d257f63f8c2c1f4acc583d21b)
Fix #26685
If a commit status comes from Gitea Actions and the user cannot access
the repo's actions unit (the user does not have the permission or the
actions unit is disabled), a 404 page will occur after clicking the
"Details" link. We should hide the "Details" link in this case.
<img
src="https://github.com/go-gitea/gitea/assets/15528715/68361714-b784-4bb5-baab-efde4221f466"
width="400px" />
(cherry picked from commit 7dec8de9147b20c014d68bb1020afe28a263b95a)
Conflicts:
routers/web/repo/commit.go
trivial context conflict
The previous commit laid out the foundation of the quota engine, this
one builds on top of it, and implements the actual enforcement.
Enforcement happens at the route decoration level, whenever possible. In
case of the API, when over quota, a 413 error is returned, with an
appropriate JSON payload. In case of web routes, a 413 HTML page is
rendered with similar information.
This implementation is for a **soft quota**: quota usage is checked
before an operation is to be performed, and the operation is *only*
denied if the user is already over quota. This makes it possible to go
over quota, but has the significant advantage of being practically
implementable within the current Forgejo architecture.
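A minimal sketch of such a soft-quota check expressed as a route decorator; the `QuotaChecker` interface, subject string, and wiring are placeholders rather than Forgejo's actual API:

```go
package web

import "net/http"

// QuotaChecker reports whether a user is already over quota for a
// subject; it stands in for the real quota engine here.
type QuotaChecker interface {
	IsOverQuota(userID int64, subject string) (bool, error)
}

// EnforceQuota denies the request only when the user is *already* over
// quota, so a single operation can still push them past the limit
// (soft quota).
func EnforceQuota(q QuotaChecker, subject string, userID func(*http.Request) int64, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		over, err := q.IsOverQuota(userID(r), subject)
		if err != nil {
			http.Error(w, "quota check failed", http.StatusInternalServerError)
			return
		}
		if over {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusRequestEntityTooLarge) // 413
			_, _ = w.Write([]byte(`{"message":"quota exceeded"}`))
			return
		}
		next.ServeHTTP(w, r)
	})
}
```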
The goal of enforcement is to deny actions that can make the user go
over quota, and allow the rest. As such, deleting things should - in
almost all cases - be possible. A prime exception is deleting files via
the web UI: that creates a new commit, which in turn increases the repo
size, and is thus denied if the user is over quota.
Limitations
-----------
Because we generally work at the route decorator level, and rarely
look *into* the operation itself, `size:repos:public` and
`size:repos:private` are not enforced at this level; the engine enforces
against `size:repos:all` instead. This will be improved in the future.
AGit does not play very well with this system, because AGit PRs count
toward the repo they're opened against, while in the GitHub-style fork +
pull model, they count against the fork. This, too, can be improved in
the future.
There's very little done on the UI side to guard against going over
quota. What this patch implements, is enforcement, not prevention. The
UI will still let you *try* operations that *will* result in a denial.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
This is an implementation of a quota engine, and the API routes to
manage its settings. This does *not* contain any enforcement code: this
is just the bedrock, the engine itself.
The goal of the engine is to be flexible and future proof: to be nimble
enough to build on it further, without having to rewrite large parts of
it.
It might feel a little more complicated than necessary, because the goal
was to be able to support scenarios only very few Forgejo instances
need, scenarios the vast majority of mostly smaller instances simply do
not care about. The goal is to support both big and small, and for that,
we need a solid, flexible foundation.
There are three big parts to the engine: counting quota use, setting
limits, and evaluating whether the usage is within the limits. Sounds
simple on paper, less so in practice!
Quota counting
==============
Quota is counted based on repo ownership, whenever possible, because
repo owners are in ultimate control over the resources they use: they
can delete repos, attachments, everything, even if they don't *own*
those themselves. They can clean up, and will always have the permission
and access required to do so. If we counted quota based on the owning
user instead, that could lead to situations where a user is unable to
free up space, because they uploaded a big attachment to a repo that has
since been taken private. It's both fairer and much safer to count quota
against repo owners.
This means that if user A uploads an attachment to an issue opened
against organization O, that will count towards the quota of
organization O, rather than user A.
One's quota usage stats can be queried using the `/user/quota` API
endpoint. To figure out what's eating into it, the
`/user/repos?order_by=size`, `/user/quota/attachments`,
`/user/quota/artifacts`, and `/user/quota/packages` endpoints should be
consulted. There's also `/user/quota/check?subject=<...>` to check
whether the signed-in user is within a particular quota limit.
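A small usage sketch querying the check endpoint from Go; the instance URL, token, and exact response shape are placeholders, only the route and `subject` parameter come from the description above:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	base := "https://forgejo.example.com/api/v1"            // placeholder instance
	subject := url.QueryEscape("size:assets:packages:all") // one of the size subjects listed further down

	req, err := http.NewRequest("GET", base+"/user/quota/check?subject="+subject, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "token <api-token>") // placeholder token

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```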
Quotas are counted based on sizes stored in the database.
Setting quota limits
====================
There are different "subjects" one can limit usage for. At this time,
only size-based limits are implemented, which are:
- `size:all`: As the name would imply, the total size of everything
Forgejo tracks.
- `size:repos:all`: The total size of all repositories (not including
LFS).
- `size:repos:public`: The total size of all public repositories (not
including LFS).
- `size:repos:private`: The total size of all private repositories (not
including LFS).
- `size:git:all`: The total size of all git data (including all
repositories, and LFS).
- `size:git:lfs`: The size of all git LFS data (either in private or
public repos).
- `size:assets:all`: The size of all assets tracked by Forgejo.
- `size:assets:attachments:all`: The size of all kinds of attachments
tracked by Forgejo.
- `size:assets:attachments:issues`: Size of all attachments attached to
issues, including issue comments.
- `size:assets:attachments:releases`: Size of all attachments attached
to releases. This does *not* include automatically generated archives.
- `size:assets:artifacts`: Size of all Action artifacts.
- `size:assets:packages:all`: Size of all Packages.
- `size:wiki`: Wiki size
Wiki size is currently not tracked, and the engine will always deem it
within quota.
These subjects are built into Rules, which set a limit on *all* subjects
within a rule. Thus, we can create a rule that says: "1Gb limit on all
release assets, all packages, and git LFS, combined". For a rule to
stand, the total sum of all subjects must be below the rule's limit.
Rules are in turn collected into groups. A group is just a name, and a
list of rules. For a group to stand, all of its rules must stand. Thus,
if we have a group with two rules, one that sets a combined 1Gb limit on
release assets, all packages, and git LFS, and another rule that sets a
256Mb limit on packages, if the user has 512Mb of packages, the group
will not stand, because the second rule deems it over quota. Similarly,
if the user has only 128Mb of packages, but 900Mb of release assets, the
group will not stand, because the combined size of packages and release
assets is over the 1Gb limit of the first rule.
Groups themselves are collected into Group Lists. A group list stands
when *any* of the groups within stand. This allows an administrator to
set conservative defaults, but then place select users into additional
groups that increase some aspect of their limits.
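A condensed sketch of that evaluation order (a rule sums its subjects, a group needs all of its rules to stand, a group list stands if any group stands); the types are illustrative, not the engine's actual ones:

```go
package quota

// Rule sets a single limit on the combined usage of all its subjects.
type Rule struct {
	Limit    int64
	Subjects []string
}

// Group stands only if every one of its rules stands.
type Group struct {
	Name  string
	Rules []Rule
}

// usage maps a subject (e.g. "size:assets:packages:all") to its size.
type usage map[string]int64

func (r Rule) stands(u usage) bool {
	var total int64
	for _, s := range r.Subjects {
		total += u[s]
	}
	return total <= r.Limit
}

func (g Group) stands(u usage) bool {
	for _, r := range g.Rules {
		if !r.stands(u) {
			return false
		}
	}
	return true
}

// groupListStands: the user is within quota if *any* of their groups stands.
func groupListStands(groups []Group, u usage) bool {
	for _, g := range groups {
		if g.stands(u) {
			return true
		}
	}
	return false
}
```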
To top it off, it is possible to set the default quota groups a user
belongs to in `app.ini`. If there's no explicit assignment, the engine
will use the default groups. This makes it possible to avoid having to
assign each and every user a list of quota groups, and only those need
to be explicitly assigned who need a different set of groups than the
defaults.
If a user has any quota groups assigned to them, the default list will
not be considered for them.
The management APIs
===================
This commit contains the engine itself, its unit tests, and the quota
management APIs. It does not contain any enforcement.
The APIs are documented in-code, and in the swagger docs, and the
integration tests can serve as an example on how to use them.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
Add an optional `order_by` parameter to the `user.ListMyRepos`
handler (which handles the `/api/v1/user/repos` route), allowing a user
to sort repos by name (the default), id, or size.
The latter will be useful later for figuring out which repos use most
space, which repos eat most into a user's quota.
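A hedged sketch of how the handler might translate the parameter into a sort order (the allowed values come from the description above; the clause strings and default direction are assumptions):

```go
package repo

import "fmt"

// reposOrderBy maps the order_by query parameter onto an ORDER BY
// clause, defaulting to name and rejecting unexpected values.
func reposOrderBy(orderBy string) (string, error) {
	switch orderBy {
	case "", "name":
		return "lower_name ASC", nil
	case "id":
		return "id ASC", nil
	case "size":
		return "size DESC", nil // biggest repos first
	default:
		return "", fmt.Errorf("invalid order_by: %q", orderBy)
	}
}
```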
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
Upgrade to release-notes-assistant 1.1.1:
* multiline release notes drafts were incorrectly categorized
according to the first line, instead of for each line
* when there is a backport, link the original PR first
* remove spurious </a>