* support changing label colors
* support changing issue state
* use helpers to keep type conversions DRY
* drop the x/exp license because it is no longer used
The tests are performed by the gof3 compliance suite.
If a pull request review was assigned to a team, the members of the team were
not shown in the "requested_reviewers" field, so the field was null. As a
solution, the team members are now added to the array.
Fix #31764
(cherry picked from commit 94cca8846e7d62c8a295d70c8199d706dfa60e5c)
This reverts commit 4ed372af13.
This change from Gitea was not considered by the Forgejo UI team and there is a consensus that it feels like a regression.
The test which was added in that commit is kept and modified to test that reviews can successfully be submitted on closed and merged PRs.
Closes forgejo/design#11
ForkRepository performs two different functions:
* The fork itself, if it does not already exist
* Updates and notifications after the fork is performed
The function is split to reflect that and otherwise unmodified.
The two functions are given different names to:
* clarify which integration test provides coverage
* distinguish them from the notification method of the same name
The previous arch package grouping was not well suited to complex or multi-architecture environments. This change brings the following:
- Support for grouping by any path.
- New support for packages in `xz` format.
- Fix cleanup rules.
<!--start release-notes-assistant-->
## Draft release notes
<!--URL:https://codeberg.org/forgejo/forgejo-->
- Features
- [PR](https://codeberg.org/forgejo/forgejo/pulls/4903): <!--number 4903 --><!--line 0 --><!--description c3VwcG9ydCBncm91cGluZyBieSBhbnkgcGF0aCBmb3IgYXJjaCBwYWNrYWdl-->support grouping by any path for arch package<!--description-->
<!--end release-notes-assistant-->
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4903
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Exploding Dragon <explodingfkl@gmail.com>
Co-committed-by: Exploding Dragon <explodingfkl@gmail.com>
- Fix "WARNING: item list for enum is not a valid JSON array, using the
old deprecated format" messages from
https://github.com/go-swagger/go-swagger in the CI.
- parsing scopes in `grantAdditionalScopes`
- read basic user info if `read:user`
- fail reading repository info if only `read:user`
- read repository info if `read:repository`
- if `setting.OAuth2.EnabledAdditionalGrantScopes` is not provided, it reads
all groups (public+private)
- if `setting.OAuth2.EnabledAdditionalGrantScopes` is provided, it reads
only public groups
- if `setting.OAuth2.EnabledAdditionalGrantScopes` and `read:organization`
are provided, it reads all groups
- `CheckOAuthAccessToken` returns both user ID and additional scopes
- `grantAdditionalScopes` returns an AccessTokenScope-ready string (grantScopes)
compiled from the additional scopes requested by the client
- `userIDFromToken` sets returned grantScopes (if any) instead of default `all`
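A minimal sketch of the scope compilation described above, assuming an illustrative mapping table and signature; this is not the actual Forgejo implementation of `grantAdditionalScopes`, only the idea:

```go
package oauth2

import "strings"

// scopeMap is an illustrative assumption: it maps the additional grant scopes
// a client may request to access-token scope names.
var scopeMap = map[string]string{
	"read:user":         "read:user",
	"read:repository":   "read:repository",
	"read:organization": "read:organization",
}

// grantAdditionalScopes compiles the scopes requested by the client into a
// comma-separated, AccessTokenScope-ready string; unrecognized entries are
// dropped, and "all" remains the fallback when nothing matches.
func grantAdditionalScopes(requestedScopes string) string {
	var granted []string
	for _, s := range strings.Fields(requestedScopes) {
		if mapped, ok := scopeMap[s]; ok {
			granted = append(granted, mapped)
		}
	}
	if len(granted) == 0 {
		return "all"
	}
	return strings.Join(granted, ",")
}
```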
I was facing issues while writing unit tests for the federation code. Mocks weren't catching all network calls because some were out of scope of the mocking infrastructure. Plus, I think we can have more granular tests.
This PR puts the client behind an interface that can be retrieved from `ctx`. The context doesn't require initialization, as it defaults to the implementation available in-tree. It may be overridden when required (such as in testing).
## Mechanism
1. Get client factory from `ctx` (factory contains network and crypto parameters that are needed)
2. Initialize client with sender's keys and the receiver's public key
3. Use client as before.
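A minimal sketch of the factory-from-context mechanism, assuming hypothetical names (`FullClient`, `ClientFactory`, `WithFactory`, `FactoryFromContext`) rather than the actual Forgejo API:

```go
package activitypub

import "context"

// FullClient and ClientFactory are illustrative interfaces; the real names
// and method sets in Forgejo may differ.
type FullClient interface {
	Post(body []byte, inboxURL string) error
}

type ClientFactory interface {
	// WithKeys initializes a client with the sender's keys.
	WithKeys(ctx context.Context, senderKeyID, senderPrivPEM string) (FullClient, error)
}

type factoryKey struct{}

// WithFactory lets callers (typically tests) override the factory in ctx.
func WithFactory(ctx context.Context, f ClientFactory) context.Context {
	return context.WithValue(ctx, factoryKey{}, f)
}

// FactoryFromContext returns the factory stored in ctx, falling back to the
// in-tree default when none was set, so no explicit initialization is needed.
func FactoryFromContext(ctx context.Context, defaultFactory ClientFactory) ClientFactory {
	if f, ok := ctx.Value(factoryKey{}).(ClientFactory); ok {
		return f
	}
	return defaultFactory
}
```

A test can then call `WithFactory(ctx, mockFactory)` so every network call goes through the mock.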
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4853
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Aravinth Manivannan <realaravinth@batsense.net>
Co-committed-by: Aravinth Manivannan <realaravinth@batsense.net>
- Adjust the `RepoRefByType` middleware to allow for commit SHAs that
are as short as 4 characters (the minimum that Git requires); see the sketch below.
- Integration test added.
- Follow up to 4d76bbeda7
- Resolves #4781
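A minimal sketch of accepting abbreviated SHAs, assuming a simple regex check; the real `RepoRefByType` middleware is more involved:

```go
package main

import (
	"fmt"
	"regexp"
)

// shortSHAPattern accepts abbreviated commit SHAs down to 4 hex digits, the
// minimum abbreviation length Git accepts (full SHA-256 hashes are 64 digits).
var shortSHAPattern = regexp.MustCompile(`^[0-9a-f]{4,64}$`)

func main() {
	fmt.Println(shortSHAPattern.MatchString("4d76"))    // true: 4 characters is enough
	fmt.Println(shortSHAPattern.MatchString("4d7"))     // false: shorter than the minimum
	fmt.Println(shortSHAPattern.MatchString("4d76bbe")) // true
}
```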
Part of #24256.
Clear up old action logs to free up storage space.
Users will see a message indicating that the log has been cleared if
they view old tasks.
<img width="1361" alt="image"
src="https://github.com/user-attachments/assets/9f0f3a3a-bc5a-402f-90ca-49282d196c22">
Docs: https://gitea.com/gitea/docs/pulls/40
---------
Co-authored-by: silverwind <me@silverwind.io>
(cherry picked from commit 687c1182482ad9443a5911c068b317a91c91d586)
Conflicts:
custom/conf/app.example.ini
routers/web/repo/actions/view.go
trivial context conflict
Fix #31137.
Replace #31623 #31697.
When migrating LFS objects, if there's any object that failed (like some
objects being lost, which is not really critical), Gitea stops
migrating LFS immediately but treats the migration as successful.
This PR checks the error according to the [LFS api
doc](https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md#successful-responses).
> LFS object error codes should match HTTP status codes where possible:
>
> - 404 - The object does not exist on the server.
> - 409 - The specified hash algorithm disagrees with the server's
acceptable options.
> - 410 - The object was removed by the owner.
> - 422 - Validation error.
If the error is `404`, it's safe to ignore it and continue migration.
Otherwise, stop the migration and mark it as failed to ensure data
integrity of LFS objects.
And maybe we should also ignore other errors (maybe `410`? I'm not sure
what the difference is between "does not exist" and "removed by the
owner"); we can add that later when users report that their LFS migration
failed because of an error that should have been ignored.
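A minimal sketch of the intended behavior, with illustrative types rather than Forgejo's actual LFS structs:

```go
package lfs

import "fmt"

// objectError and batchObject loosely mirror the LFS batch API response;
// they are assumptions for this sketch, not Forgejo's actual types.
type objectError struct {
	Code    int
	Message string
}

type batchObject struct {
	Oid   string
	Error *objectError
}

// filterMissing returns the downloadable objects, skipping 404 ("object does
// not exist") errors and failing the migration on every other error code.
func filterMissing(objects []batchObject) ([]batchObject, error) {
	var ok []batchObject
	for _, obj := range objects {
		if obj.Error == nil {
			ok = append(ok, obj)
			continue
		}
		if obj.Error.Code == 404 {
			// Safe to ignore: the object is simply missing on the source server.
			continue
		}
		// 409, 410, 422, ...: abort to keep the migrated LFS data consistent.
		return nil, fmt.Errorf("LFS object %s: %d %s", obj.Oid, obj.Error.Code, obj.Error.Message)
	}
	return ok, nil
}
```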
(cherry picked from commit 09b56fc0690317891829906d45c1d645794c63d5)
There's already `initActionsTasks`; moving `registerActionsCleanup` into it
avoids an additional check for whether Actions is enabled.
And we don't really need `OlderThanConfig`.
(cherry picked from commit f989f464386139592b6911cad1be4c901eb97fe5)
The previous commit laid out the foundation of the quota engine, this
one builds on top of it, and implements the actual enforcement.
Enforcement happens at the route decoration level, whenever possible. In
case of the API, when over quota, a 413 error is returned, with an
appropriate JSON payload. In case of web routes, a 413 HTML page is
rendered with similar information.
This implementation is for a **soft quota**: quota usage is checked
before an operation is to be performed, and the operation is *only*
denied if the user is already over quota. This makes it possible to go
over quota, but has the significant advantage of being practically
implementable within the current Forgejo architecture.
The goal of enforcement is to deny actions that can make the user go
over quota, and allow the rest. As such, deleting things should - in
almost all cases - be possible. A prime exception is deleting files via
the web UI: that creates a new commit, which in turn increases the repo
size, and is thus denied if the user is over quota.
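A minimal sketch of what such a route decorator looks like, using plain `net/http` and hypothetical helper names rather than Forgejo's actual middleware:

```go
package quota

import (
	"encoding/json"
	"net/http"
)

// isOverQuota is a stand-in for the real usage/limit lookup; an assumption
// for this sketch only.
func isOverQuota(userID int64, subject string) (bool, error) { return false, nil }

// Enforce checks the quota *before* the operation runs and only denies users
// who are already over quota (soft quota), answering with a 413 and a JSON
// payload as described above.
func Enforce(subject string, userID int64, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		over, err := isOverQuota(userID, subject)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if over {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusRequestEntityTooLarge) // 413
			_ = json.NewEncoder(w).Encode(map[string]string{"message": "quota exceeded for " + subject})
			return
		}
		next.ServeHTTP(w, r)
	})
}
```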
Limitations
-----------
Because we generally work at the route decorator level, and rarely
look *into* the operation itself, `size:repos:public` and
`size:repos:private` are not enforced at this level; the engine enforces
against `size:repos:all`. This will be improved in the future.
AGit does not play very well with this system, because AGit PRs count
toward the repo they're opened against, while in the GitHub-style fork +
pull model, the contribution counts against the fork. This, too, can be
improved in the future.
There's very little done on the UI side to guard against going over
quota. What this patch implements, is enforcement, not prevention. The
UI will still let you *try* operations that *will* result in a denial.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
This is an implementation of a quota engine, and the API routes to
manage its settings. This does *not* contain any enforcement code: this
is just the bedrock, the engine itself.
The goal of the engine is to be flexible and future proof: to be nimble
enough to build on it further, without having to rewrite large parts of
it.
It might feel a little more complicated than necessary, because the goal
was to be able to support scenarios only very few Forgejo instances
need, scenarios the vast majority of mostly smaller instances simply do
not care about. The goal is to support both big and small, and for that,
we need a solid, flexible foundation.
There are three big parts to the engine: counting quota use, setting
limits, and evaluating whether the usage is within the limits. Sounds
simple on paper, less so in practice!
Quota counting
==============
Quota is counted based on repo ownership, whenever possible, because
repo owners are in ultimate control over the resources they use: they
can delete repos, attachments, everything, even if they don't *own*
those themselves. They can clean up, and will always have the permission
and access required to do so. If we counted quota against the owning
user instead, that could lead to situations where a user is unable to free up
space, because they uploaded a big attachment to a repo that has since been
taken private. It's both fairer and much safer to count quota
against repo owners.
This means that if user A uploads an attachment to an issue opened
against organization O, that will count towards the quota of
organization O, rather than user A.
One's quota usage stats can be queried using the `/user/quota` API
endpoint. To figure out what's eating into it, the
`/user/repos?order_by=size`, `/user/quota/attachments`,
`/user/quota/artifacts`, and `/user/quota/packages` endpoints should be
consulted. There's also `/user/quota/check?subject=<...>` to check
whether the signed-in user is within a particular quota limit.
Quotas are counted based on sizes stored in the database.
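A minimal sketch of querying the usage endpoint mentioned above, assuming a placeholder instance URL and token:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Instance URL and token are placeholders; adjust them to your setup.
	req, err := http.NewRequest("GET", "https://forgejo.example.com/api/v1/user/quota", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "token <YOUR_ACCESS_TOKEN>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // JSON describing the signed-in user's quota usage
}
```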
Setting quota limits
====================
There are different "subjects" one can limit usage for. At this time,
only size-based limits are implemented, which are:
- `size:all`: As the name would imply, the total size of everything
Forgejo tracks.
- `size:repos:all`: The total size of all repositories (not including
LFS).
- `size:repos:public`: The total size of all public repositories (not
including LFS).
- `size:repos:private`: The total size of all private repositories (not
including LFS).
- `size:git:all`: The total size of all git data (including all
repositories, and LFS).
- `size:git:lfs`: The size of all git LFS data (either in private or
public repos).
- `size:assets:all`: The size of all assets tracked by Forgejo.
- `size:assets:attachments:all`: The size of all kinds of attachments
tracked by Forgejo.
- `size:assets:attachments:issues`: Size of all attachments attached to
issues, including issue comments.
- `size:assets:attachments:releases`: Size of all attachments attached
to releases. This does *not* include automatically generated archives.
- `size:assets:artifacts`: Size of all Action artifacts.
- `size:assets:packages:all`: Size of all Packages.
- `size:wiki`: Wiki size
Wiki size is currently not tracked, and the engine will always deem it
within quota.
These subjects are built into Rules, which set a limit on *all* subjects
within a rule. Thus, we can create a rule that says: "1Gb limit on all
release assets, all packages, and git LFS, combined". For a rule to
stand, the total sum of all subjects must be below the rule's limit.
Rules are in turn collected into groups. A group is just a name, and a
list of rules. For a group to stand, all of its rules must stand. Thus,
if we have a group with two rules, one that sets a combined 1Gb limit on
release assets, all packages, and git LFS, and another rule that sets a
256Mb limit on packages, if the user has 512Mb of packages, the group
will not stand, because the second rule deems it over quota. Similarly,
if the user has only 128Mb of packages, but 900Mb of release assets, the
group will not stand, because the combined size of packages and release
assets is over the 1Gb limit of the first rule.
Groups themselves are collected into Group Lists. A group list stands
when *any* of the groups within stand. This allows an administrator to
set conservative defaults, but then place select users into additional
groups that increase some aspect of their limits.
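A minimal sketch of the evaluation logic described above, with illustrative types rather than Forgejo's actual model:

```go
package quota

// Rule and Group are illustrative types for this sketch only.
type Rule struct {
	Limit    int64    // combined limit in bytes
	Subjects []string // e.g. "size:assets:packages:all", "size:git:lfs"
}

type Group struct {
	Rules []Rule
}

// ruleStands: the summed usage of all the rule's subjects must stay within
// the rule's limit.
func ruleStands(r Rule, usage map[string]int64) bool {
	var sum int64
	for _, subject := range r.Subjects {
		sum += usage[subject]
	}
	return sum <= r.Limit
}

// groupStands: every rule in the group must stand.
func groupStands(g Group, usage map[string]int64) bool {
	for _, r := range g.Rules {
		if !ruleStands(r, usage) {
			return false
		}
	}
	return true
}

// groupListStands: the user is within quota if *any* of their groups stands.
// (Handling of an empty group list is out of scope for this sketch.)
func groupListStands(groups []Group, usage map[string]int64) bool {
	for _, g := range groups {
		if groupStands(g, usage) {
			return true
		}
	}
	return false
}
```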
To top it off, it is possible to set the default quota groups a user
belongs to in `app.ini`. If there's no explicit assignment, the engine
will use the default groups. This makes it possible to avoid having to
assign each and every user a list of quota groups, and only those need
to be explicitly assigned who need a different set of groups than the
defaults.
If a user has any quota groups assigned to them, the default list will
not be considered for them.
The management APIs
===================
This commit contains the engine itself, its unit tests, and the quota
management APIs. It does not contain any enforcement.
The APIs are documented in-code, and in the swagger docs, and the
integration tests can serve as an example on how to use them.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
- add package counter to repo/user/org overview pages
- add go unit tests for repo/user has/count packages
- add many more unit tests for packages model
- fix error for non-existing packages in DeletePackageByID and SetRepositoryLink
Replace a double select with a simple select.
The complication originates from the initial implementation which
deleted packages instead of selecting them. It was justified to
work around a problem in MySQL. But it is just a waste of resources
when collecting a list of IDs.
- In the spirit of #4635
- Notify the owner when their account is getting enrolled into TOTP. The
message is adjusted depending on whether they have security keys or not.
- Integration test added.
- Regression of #4635
- The authentication mails weren't being sent with links to the
instance, because the wrong variable was used in the mail footer.
`$.AppUrl` should've been `AppUrl`.
- Unit test added.
- Currently, if the password, primary mail, TOTP or security keys are
changed, no notification is made of that, which makes compromising an
account a bit easier, as it's essentially undetectable until the original
person tries to log in. Although other changes should be made as
well (re-authenticating before allowing a password change), this should go a
long way toward improving account security in Forgejo.
- Adds a mail notification for password and primary mail changes. For
the primary mail change, a mail notification is sent to the old primary
mail.
- Add a mail notification when TOTP or a security key is removed; if no
other 2FA method is configured, the mail will also mention that 2FA is
no longer needed to log into the account.
- `MakeEmailAddressPrimary` is refactored to the user service package,
as it now involves calling the mailer service.
- Unit tests added.
- Integration tests added.
This leverages the existing `sync_external_users` cron job to
synchronize the `IsActive` flag on users who use an OAuth2 provider set
to synchronize. This synchronization is done by checking for expired
access tokens, and using the stored refresh token to request a new
access token. If the response back from the OAuth2 provider is the
`invalid_grant` error code, the user is marked as inactive. However, the
user is able to reactivate their account by logging in via the web browser
through their OAuth2 flow.
Also changed to support this: a linked `ExternalLoginUser` is now
always created upon login or signup via OAuth2.
Ideally, we would also refresh permissions from the configured OAuth
provider (e.g., admin, restricted and group mappings) to match the
implementation of LDAP. However, the OAuth library used for this `goth`,
doesn't seem to support issuing a session via refresh tokens. The
interface provides a [`RefreshToken`
method](https://github.com/markbates/goth/blob/master/provider.go#L20),
but the returned `oauth.Token` doesn't implement the `goth.Session` we
would need to call `FetchUser`. Due to specific implementations, we
would need to build a compatibility function for every provider, since
they cast to concrete types (e.g.
[Azure](https://github.com/markbates/goth/blob/master/providers/azureadv2/azureadv2.go#L132)).
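A minimal sketch of the deactivation decision, assuming `golang.org/x/oauth2` for the refresh and a hypothetical helper name; this is not the actual Forgejo code:

```go
package oauthsync

import (
	"context"
	"errors"
	"fmt"
	"strings"

	"golang.org/x/oauth2"
)

// shouldDeactivate tries to redeem the stored (expired) token via the refresh
// token and treats an "invalid_grant" answer from the provider as "mark the
// user inactive". Any other error leaves the user untouched.
func shouldDeactivate(ctx context.Context, conf *oauth2.Config, stored *oauth2.Token) (bool, error) {
	// TokenSource refreshes automatically when the access token has expired.
	_, err := conf.TokenSource(ctx, stored).Token()
	if err == nil {
		return false, nil
	}
	var rErr *oauth2.RetrieveError
	if errors.As(err, &rErr) && strings.Contains(string(rErr.Body), "invalid_grant") {
		// The provider rejected the refresh token: deactivate the user.
		return true, nil
	}
	// Network problems etc. should not deactivate anyone.
	return false, fmt.Errorf("refreshing token: %w", err)
}
```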
---------
Co-authored-by: Kyle D <kdumontnu@gmail.com>
(cherry picked from commit 416c36f3034e228a27258b5a8a15eec4e5e426ba)
Conflicts:
- tests/integration/auth_ldap_test.go
Trivial conflict resolved by manually applying the change.
- routers/web/auth/oauth.go
Technically not a conflict, but the original PR removed the
modules/util import, which in our version, is still in use. Added it
back.
Make it possible to let mails show e.g.:
`Max Musternam (via gitea.kithara.com) <gitea@kithara.com>`
Docs: https://gitea.com/gitea/docs/pulls/23
---
*Sponsored by Kithara Software GmbH*
(cherry picked from commit 0f533241829d0d48aa16a91e7dc0614fe50bc317)
Conflicts:
- services/mailer/mail_release.go
services/mailer/mail_test.go
In both cases, applied the changes manually.
When the title of an issue or a pull request is changed, the edited
event must be triggered, in the same way it is when the body of the
description is changed.
The web endpoints and the API endpoints for both pull requests and
issues rely on issue_service.ChangeTitle which calls
notify_service.IssueChangeTitle.
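A minimal, simplified sketch of that call flow, with hypothetical types; the real services carry much more context:

```go
package issues

import "context"

// Issue and Notifier are illustrative stand-ins for this sketch only.
type Issue struct{ Title string }

type Notifier interface {
	IssueChangeTitle(ctx context.Context, issue *Issue, oldTitle string)
}

// ChangeTitle updates the title and fires the same "edited" notification path
// that description changes use, so webhooks and Actions see an edited event.
func ChangeTitle(ctx context.Context, n Notifier, issue *Issue, title string) {
	oldTitle := issue.Title
	issue.Title = title
	n.IssueChangeTitle(ctx, issue, oldTitle)
}
```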
Before, we had just the plain mail address as the recipient. Now we provide additional information for the mail clients.
---
Porting information:
- Two behavior changes are noted with this patch: the display name is now always quoted (although unnecessary in some scenarios, it's a safety precaution of Go), and B encoding is used when certain characters are present, as they aren't 'legal' in a display name and Q encoding would still show them; this is now done by Go's `address.String()` (illustrated below).
- Update and add new unit tests.
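A minimal example of the standard-library `net/mail` behavior the porting notes refer to:

```go
package main

import (
	"fmt"
	"net/mail"
)

func main() {
	// Printable-ASCII display name: Go always quotes it, even when quoting
	// would not strictly be needed.
	a := mail.Address{Name: "Max Musternam (via gitea.kithara.com)", Address: "gitea@kithara.com"}
	fmt.Println(a.String())

	// Non-ASCII display name: Go falls back to RFC 2047 word encoding
	// (Q or B encoding, depending on the characters present).
	b := mail.Address{Name: "Grüße, Max", Address: "gitea@kithara.com"}
	fmt.Println(b.String())
}
```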
Co-authored-by: 6543 <6543@obermui.de>
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/4516
Reviewed-by: Earl Warren <earl-warren@noreply.codeberg.org>
Co-authored-by: Gusted <postmaster@gusted.xyz>
Co-committed-by: Gusted <postmaster@gusted.xyz>
- We were previously using `github.com/keybase/go-crypto` because Go's own
openpgp package is deprecated and no longer maintained. This library
provided a maintained version of the openpgp package. However, it hasn't
seen any activity for the last five years, and I would therefore consider
it unmaintained as well.
- This patch switches the package to `github.com/ProtonMail/go-crypto`
which provides a maintained version of the openpgp package and was
already being used in the tests.
- Adds unit tests. I've carefully checked the call stacks to ensure the
OpenPGP-related code was covered by either a unit test or an integration
test to avoid regressions, as this can easily turn into a security
vulnerability if a regression happens here.
- Small behavior update: revocations are now checked correctly instead
of merely checking that they exist, and the expiry time of a subkey is used
if one is provided (this is just cosmetic and doesn't impact security).
- One more dependency eliminated :D